Sunday, June 30, 2019

aspect ratio - Best base resolution for a pixel-art project for a bigger audience



I've been working on a top-down 2D pixel art project for quite some time. When I first started the project, I set the application surface, display and game window to match 1:1 - with that, the game performed well and the project looked pixel-perfect, without tearing or anything... BUT I'm not making an Age of Empires or SimCity clone, where the bigger the display, the more you see, but the smaller things get. So I completely rewrote my resolution script to use a fixed view size...


Now, the thing is:




  • Images/sprites are mostly between 32x32 and 64x64


  • I need to figure out a good base resolution for a bigger audience (Steam, the community, etc.)


    The game can't have an infinitely large view, because the character shouldn't see too far, and at the same time shouldn't be barely visible on large monitors



  • With that said, it has to look more or less OK on different display resolutions, but attain the best quality using an aspect ratio function, even if that means black borders around the view

  • The game has a configuration page for different resolutions, but that only affects how much everything has to stretch, and it is not currently used from the user interface (I test it from the in-game console only)


The question basically is:




  • Which is the better base - 640x480 or 800x600?

  • What base resolution might the videogame "Hotline Miami" be considered to have?

  • Does the whole base-resolution concept really matter, and should I worry about the user experience here?


This whole thing reminds me a bit of Fallout 2 with "2x scaling" locked on at high resolutions, which basically does something similar to what I'm going for...
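For reference, the kind of scaling I mean boils down to something like this (a rough generic sketch, not my actual resolution script; baseW/baseH stand for whatever base resolution I end up choosing):

struct Viewport { int x, y, w, h; };

// Pick the largest integer scale that still fits the display,
// then centre the view and letterbox the leftover area with black bars.
Viewport FitBaseResolution(int baseW, int baseH, int displayW, int displayH) {
    int scale = displayW / baseW;
    if (displayH / baseH < scale) scale = displayH / baseH;
    if (scale < 1) scale = 1;                 // never go below 1:1

    Viewport v;
    v.w = baseW * scale;
    v.h = baseH * scale;
    v.x = (displayW - v.w) / 2;               // centred; the rest is black bars
    v.y = (displayH - v.h) / 2;
    return v;
}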


For a better understanding of what I'm trying to say, here's a visual illustration (the green line is basically the border of the room, but due to the display resolution you can see past the border):


BEFORE Infinite view


AFTER Fixed size view


** The "after" image doesn't use the aspect ratio function in this picture, but instead shows past the intended viewing frame, which is why you don't see black boxes on the sides... this was fixed later on, though





xna - Draw many flashlights' focused lights' "circles" on a voxel engine map (and other objects)


So, I've been searching the internet for a while and I still don't understand how to create a flashlight's "circle" in front of the camera, over other objects.


I found the following links:


using shader;


multi pass effect;


forum question about multiple passes and multiple effects;


shader example with multiple lights (which I can't open because it's an old project).



And they give me examples and solutions about how to draw lights or other effects on objects.


But it's not clear to me yet. I'm a newbie with HLSL (and still a bit of one with XNA itself) and I don't get what I should do. I need help with the steps I should take to modify the voxel engine's HLSL effect* so it draws a white circle where the cursor points and shines light around the camera position while the flashlight is on.


*I created a voxel engine that uses an effect (modified a little) that I found in a project called TechCraft. It changes the light on the map depending on the time of day and the sun position. But I'm not sure how it works either; I've only read a little, and I'm not used to HLSL, as I said. (Where could I find good HLSL tutorials?)


Do I need to make many passes and make a pixel shader and a vertex shader for each one? Or should I create many techniques? What's the difference?


Or is that a bad idea? Should I create another effect and draw the voxels again with its own technique and pass?


And maybe it's not only one light. If I'm able to add multiplayer someday, or NPCs, the game will have more than one light on at the same time. How do I draw more than one circle?


I'm really lost with this, and I have no idea where I should start. I appreciate any help; even pointing me to already answered questions is helpful to me.



Answer




Where could I find good HLSL tutorials?




Riemer's


Two ways off the top of my head:


Light volume
After rendering, switch to a flashlight shader and draw a cone (some triangles), with the tip located at the camera's position and the circular base pointing in the camera's forward direction, with arbitrary dimensions. You literally render a cone containing the volume you want lit and apply the flashlight "effect" to the affected pixels.


3D "Stencil"
Obtain a circular stencil texture from Paint (transparent, with centered white circle, blurred). During the flashlight pass, render a billboarded quad with the stencil texture applied. The quad can be resized to modify the beam. Output transparent anywhere the stencil is transparent. The values you put into the stencil and the resulting output do not have to be directly related. As a built-in bonus, UV (0.5, 0.5) corresponds to the center of the quad, so you can use the 2D distance from the center to diminish the alpha near the edges, if desired. If you don't need anything other than circular, you can probably just use that distance instead of a texture.
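If you go the distance-based route, the per-pixel math is tiny. Here it is written out as a plain function for illustration rather than as shader code (FlashlightAlpha is just a name for this sketch):

#include <algorithm>
#include <cmath>

// Alpha for one pixel of the billboarded quad, given its UV in [0, 1].
// Full brightness at the quad's centre (UV 0.5, 0.5), transparent at the edge.
float FlashlightAlpha(float u, float v) {
    float dx = u - 0.5f;
    float dy = v - 0.5f;
    float dist = std::sqrt(dx * dx + dy * dy);   // 0 at the centre, 0.5 at the edge midpoints
    float alpha = 1.0f - dist / 0.5f;            // linear falloff; swap in any curve you like
    return std::clamp(alpha, 0.0f, 1.0f);
}

The same arithmetic, dropped into the flashlight pass's pixel shader, gives the soft circular "lens" without any texture lookup.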


Diagram:
Both produce similar effects. The cone geometry is outlined in gray. Notice the stencil creates the same cones that geometry can; it's up to you. Stenciling gives you unlimited flexibility in describing the "lens" of the flashlight. With a little modification, inverting the stencil I've shown would produce a beam of "unlight" in an otherwise well-lit world.


stencil light



Multiple lights:
Both methods use rendered geometry, so the draw calls for them can be optimized like any other geometry. If you have 58 cones to render, draw the same unit cone over and over with 58 per-instance world matrices instead. You can render all the cones with one call to DrawIndexedInstanced(...).


For many lights, it will be preferable to use "deferred rendering/lighting". Instead of rendering the scene directly to the backbuffer, you render a description of the scene to one or more rendertargets, a lightmap to another rendertarget (or more), and then combine them at the end by drawing a fullscreen quad (2D) and sampling the textures to reconstruct, and simultaneously light, the entire scene at once.


The best link I have


Saturday, June 29, 2019

Can you help me find resources for developing a top-down 2D game in Java?



I just started reading about games, and I'm going to develop a game where a person moves around on a 2D map. My preferred language is Java. Is it suitable for developing games?


I'm going to develop a desktop app and need some help to get started. Can someone please give me some good resources for newbies?



Answer



Personally I am a big fan of jMonkey Engine. It's shader-based and geared toward high-end game production. jGame and Slick2D are very good for 2D games. libGDX might also interest you. All of these are under very active development.


There is no problem with choosing a 3D engine for making 2D games. Just ignore the extra D, unless you have some serious issue with it.


So, I wholeheartedly suggest you start with jMonkey Engine, because it's fun, easy to use, has a very active community, and comes with the awesome jMonkey Platform, which is built on top of the NetBeans platform, where every update and feature is just one click away. You might end up using it for many other projects as well.


There are other libraries too, but they lack community/development activity.



  • Genuts - last update 2004


  • PulpCore - last update 2009

  • GOLDEN T GAME ENGINE (GTGE) - last update 2010

  • Basilisk Game Library - last update 2009


You can write your own engine if you want to. Then you should use an OpenGL wrapper library, either LWJGL or JOGL. LWJGL is more widely used than JOGL.


modding - How do you make Minecraft mods?




I'm new to making Minecraft mods and I'm wondering what I need to do to get started.



Answer



Download MCP, aka the Minecraft Coder Pack. This is how all the mods were released, and I've used it a couple of times, albeit for simple mods.


Friday, June 28, 2019

subject vs subject-complement; inversion


Oxford Guide to English Grammar; John Eastwood; Oxford University Press 1994-09


Page 56




We can also sometimes put a complement in front position.


They enjoyed the holiday. Best of all was the constant sunshine.


The scheme has many good points. An advantage is the low cost.


Here the subject (the low cost) is the important information and comes at the end.



How about viewing "an advantage" as the subject while viewing "the low cost" as the subject-complement, and there being no inversion?


How about viewing "best of all" as the subject while viewing "the constant sunshine" as the subject-complement, and there being no inversion?




related: What's the grammatical structure of "all three of them periods"?




algorithm - Help understanding Simplex Noise


Introduction



This is less of a "how to" on using 2D simplex noise and more of a quest to understand what is happening both in the math and visually. I would rather not copy and paste the code I've found. I really would like to understand it.


I have done my homework on the subject and have read through Stefan Gustavson's paper and other sources many times. I feel like I understand about 70% of what my sources are saying, but when I try to manually follow my code loop through to check my understanding I either get hung up on what the variables represent or the math just doesn't add up.


I have begun to rename the variables in the code I've found in order to better understand what's going on. If someone with some knowledge of what is actually happening can cross reference the original code (basically Gustavson's code) with my code that would be most excellent. I'm not actually done with renaming the variables though, partly because I'm stuck.


My Understanding


Let's say I pass pixel (2, 3) into my noise function. At this point I'm working with a normal grid and my goal is to translate this normal grid into a simplex. This simplex is in the form of many triangles since we're working in 2D and supposedly this is better for various reasons.


To translate this point from the normal grid to the simplex grid, I must scale the point along the main diagonal line. After doing that, I apparently don't care about the decimals because I then Floor the results putting it back near the closest previous grid point.


Now I'm going to align the simplex cell to the normal grid by unskewing the simplex cell. Why I do all this work to get to the simplex cell, then undo it I'm pretty hazy on. Seriously, I think the best way for me to understand this is if someone could whiteboard out in steps what the heck is going on here. Super lost.


I didn't walk away empty handed though. I now have what I believe to be the distance between the original grid point I passed in and the simplex point I translated from that original grid point. Woo.


I think I'll stop here for now, just because I don't want this to turn into a huge wall of text. I'm pretty sure the rest of this will click once I understand the beginning part.


Weird Math



Using the (2, 3) point from earlier, this is what I get by stepping through my own code on paper:


x = 2
y = 3

skewfactor = 1.830
unskewFactor = 1.479
unskewed_x = 1.520
unskewed_y = 2.520
simplexCell_i = 3
simplexCell_j = 4

x_distance0 = -1.520
y_distance0 = -1.520
i1 = 0
j1 = 1
x1 = -1.309
y1 = -2.309
x2 = -2.5207
y2 = -2.5207
ii = no idea why this isn't i2 or something like that. No idea what this is.
jj = same


Not sure if plotting on a graph would work, but I tried and it looks so bad. I used the original parameters (2, 3), simplexCell_i & j, i1, j1, x1, y1, x2, and y2. Doesn't make any sense visually.


Not just visually, but mathematically as well. What's with (2, 3) returning negative numbers that go off the chart? What am I doing wrong here?



Answer



I think it helps to compare it side-by-side with regular Perlin noise. As explained in the Gustavson paper, Perlin noise works by assigning pseudo-random values (gradient vectors) to each corner of a square grid and then doing some interpolation for points in the interior of a grid cell. So the first step in evaluating Perlin noise is to figure out which grid cell you're in. That's what the floor function is doing: all the values between 0 and 1 are in grid cell 0, between 1 and 2 are in grid cell 1, etc. along each axis. So in classic Perlin noise you'll see some code like


int i = int(floor(x));
int j = int(floor(y));

Once you know which grid cell you're in, the rest of the algorithm can proceed.


With simplex noise, as before, the first thing you've got to do is figure out which grid cell you're in. But it's a simplex grid now, which isn't related to the input x and y in such a simple way as the square grid. However, as Perlin noted, and as Gustavson shows in the picture on page 6, if you scale a square grid along its diagonal by just the right factor, the squashed grid cells now have a shape that can be made up of several simplices. (In 2D, for instance, the squashed square is a rhombus that can be made of 2 equilateral triangles. In higher dimensions you'd have more than 2 simplices in each grid cell, but it's the same idea.)



So this provides a way to bridge the gap between the Cartesian coordinate system and the simplex grid. First you figure out which grid cell you're in with respect to the squashed square grid; then you figure out which simplex you're in within that squashed square grid cell.


To figure out which cell of the squashed square grid you're in, you first execute a linear transformation to a coordinate system whose axes line up with the squashed grid. Perlin writes it something like:


float F2 = 0.5*(sqrt(3.0)-1.0);
float s = (xin+yin)*F2;
float xSquashed = xin + s;
float ySquashed = yin + s;

Here xSquashed, ySquashed are coordinates in the local space of the squashed grid. If you stare at this for a minute, you'll see it's equivalent to the matrix transformation:


[ xSquashed, ySquashed ] = [ xin, yin ] * [ 1+F2    F2  ]
                                          [  F2    1+F2 ]


So it's just a linear change of coordinates, although the matrix multiplication is written out in a way that makes that non-obvious.


Then calculating which grid cell you're in is, just like before, simply a floor call on those local coordinates. That gives you the grid cell index for one corner of the simplex. Some more fiddling with the coordinates determines which simplex you're in within the squashed grid cell, and from there you can figure out the grid cell indices for the other corners of the simplex.
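In 2D, that fiddling boils down to a single comparison. Here's a sketch in the style of Gustavson's reference code, where x0 and y0 are the point's distances from the cell's first corner after transforming that corner back to regular coordinates (as described in the next paragraph):

// Determine which of the two triangles of the squashed square cell we are in.
int i1, j1;   // offsets of the simplex's middle corner in (i, j) space
if (x0 > y0) { i1 = 1; j1 = 0; }   // lower triangle: step along x first
else         { i1 = 0; j1 = 1; }   // upper triangle: step along y first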


Finally, why do you transform back to the regular, non-squashed coordinate system? It's because in the final stage of the algorithm, when you add together the contributions of all the corners, you want to be working with ordinary distance, not the pseudo-distance in the squashed space. That's so the noise interpolates cleanly and doesn't come out with a squashed appearance. Note that what you're transforming back to regular coordinates isn't the original input point, but the first corner of the simplex. Once you've got that, you can find the locations of the other corners and proceed with the rest of the algorithm.


As before, this transformation is written in an odd-looking form but it's equivalent to the matrix transformation:


float G2 = (3.0 - sqrt(3.0))/6.0;
[ X0, Y0 ] = [ i, j ] * [ 1-G2    -G2  ]
                        [ -G2    1-G2  ]

If you calculate it out, you'll see that this matrix and the one mentioned above are inverses of each other.



word difference - Fall vs Fall down


I can't figure out the difference between "fall" and "fall down". I saw definitions of both in Cambridge and in some other dictionaries, but they seem to be the same to me.


See these definitions:



Cambridge Fall - to suddenly go down onto the ground or towards the ground without intending to or by accident.


Cambridge Fall down - to fall to the ground


Cambridge Fall vs Fall down - We can use fall as a noun or a verb. It means ‘suddenly go down onto the ground or towards the ground unintentionally or accidentally’. It can also mean ‘come down from a higher position’. As a verb, it is irregular. Its past form is fell and its -ed form is fallen. Fall does not need an object.


*We can’t use fall down to mean ‘come down from a higher position’:


House prices have fallen a lot this year.


Not: House prices have fallen down a lot …*


What does it mean? Why is it not allowed to use fall down when something comes down from a higher position? If something falls down, then according to the laws of physics the object had to be in a higher position; if not, how did it fall "down"? It doesn't make any sense to me; how can something fall down without being in a high position?


Can anyone explain to me the difference between these two terms, because I read it many times but I still don't get it, as far as I'm concerned, it may be optional.




Thursday, June 27, 2019

c++ - Pointer deleted by object manager on next frame


I have a class named GameManager, whose job is to manage GameObject allocations and deallocations. The thing is, every GameObject can interact with every other, so there's a possibility that a pointer to another GameObject held in a GameObject's script gets deleted by the manager on the next frame. (Something like a missile: the missile's target may already have been destroyed by another object.)


I can't use shared pointers, because there are times when a GameObject should be removed but can't be, because another GameObject won't release its reference. What is a good design pattern to tackle this problem? Or should every GameObject check for null each time?



Answer




There are two facets to this issue:


First, you may want to defer destruction until the end of the frame (or at least until some time after you know all your inter-object interactions will have been resolved). This is fairly straightforward: when you "destroy" an object, put that object into a "pending kill" set and at the end of the frame, iterate the set, actually destroying every object and then emptying the set.
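A minimal sketch of that deferred-destruction idea (the GameManager/pendingKill names here are just illustrative):

#include <unordered_set>

class GameObject { public: virtual ~GameObject() = default; /* ... */ };

class GameManager {
public:
    // Called from gameplay code; the object stays valid for the rest of the frame.
    void Destroy(GameObject* object) { pendingKill.insert(object); }

    // Called once at the end of the frame, after all interactions have resolved.
    void FlushPendingKills() {
        for (GameObject* object : pendingKill)
            delete object;           // or return it to a pool / free list
        pendingKill.clear();
    }

private:
    std::unordered_set<GameObject*> pendingKill;
};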


Second is the issue of outstanding references to dead objects from live ones. This is a much larger design issue for object lifetime and there's a lot of ways you can handle it. You need to decide certain things about how your objects will function to choose a solution.



  • Does a reference to an object never need to keep that object alive?

  • Does a reference to an object always need to keep that object alive?

  • Does a reference to an object sometimes need to keep an object alive? Under what conditions, and are those conditions determined at compile time or at run time?

  • Or do you simply need to be able to tell if a referenced object has been destroyed, and consequently prevent accessing it?


(When I say "keep alive" I mean doing so simply by virtue of the reference existing. Whether or not you can request destruction of that object via that reference is another thing altogether.)



If a reference never needs to keep an object alive, you are in the ideal world. This probably means that reference is the only reference to the object, though, and the containing object entirely controls its lifetime. If you are extremely rigid about the way your objects are created and destroyed you can make this work (always ensuring objects which hold non-lifetime-preserving references are destroyed before the things they reference are, in an inside-out fashion), but it takes discipline.


If a reference must always keep an object alive, a reference-counting approach is potentially what you want to look at. You will also want to include some way of making non-counting references (so called "weak pointers," like C++'s std::weak_ptr) to break cycles or otherwise create non-counting references. You can also look into solutions involving garbage collection, although in some languages (like C++) this would be quite an intensive option. It's possible though (Unreal does it).


If a reference sometimes needs to keep an object alive and those conditions are determined at compile time, a reference-counting approach can still be used; you just choose a weak reference for that instance. If the conditions are determined at run time, you'd have to use a more complex reference-counting approach where a reference can switch from counting to non-counting, and I'd argue this is a sign you need to make your ownership policies more rigid instead.


If a reference simply needs to be able to determine if a thing is destroyed, you're probably in the second-best scenario. This is where you should try to be, in my opinion (reference counting makes overall determinism related to destruction harder to reason about). There are various handle-based solutions to object management that can work here. The crux of this approach is to keep all your objects in one place along with some metadata that includes a "salt" value, and hand out handles which reference those objects. The handle holds a pointer or index that refers to the actual object, and a salt value of its own. Whenever you actually delete an object, you increment the salt for that object's slot in the object storage. That way allows handles to test their salt against the current salt in storage and if they differ, the handle refers to a dead object.


There are various structures that can be used to implement this, such as innumerable variations on a slot map or related data structures like plf::colony (which can be conceptually adapted to support salted handles instead of iterators or bare pointers).
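To make the salt mechanism concrete, here is a bare-bones sketch (the ObjectPool/Handle names are invented for this illustration; a real slot map also recycles free slots, which this omits):

#include <cstdint>
#include <vector>

struct Handle {
    uint32_t index;   // slot the handle points at
    uint32_t salt;    // generation the handle was created for
};

template <typename T>
class ObjectPool {
public:
    Handle Create(const T& value) {
        slots.push_back({value, 0u, true});          // no free-list reuse in this sketch
        return {static_cast<uint32_t>(slots.size() - 1), 0u};
    }

    void Destroy(Handle h) {
        if (Resolve(h) != nullptr) {
            slots[h.index].alive = false;
            ++slots[h.index].salt;                   // every outstanding handle now mismatches
        }
    }

    // Returns nullptr if the handle refers to a dead (or recycled) object.
    T* Resolve(Handle h) {
        if (h.index >= slots.size()) return nullptr;
        Slot& s = slots[h.index];
        return (s.alive && s.salt == h.salt) ? &s.value : nullptr;
    }

private:
    struct Slot { T value; uint32_t salt; bool alive; };
    std::vector<Slot> slots;
};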


meaning - How should I use "infer"?


A recent question on Meta discussed advantages and disadvantages of using more advanced words in ELL. As an example, this answer was used:



'A Japanese' infers the Japanese person is a thing, and not a person. This is what deems it offensive.


'A Japanese Person' infers the Japanese person is just that - a person, and is therefore considered fine for use.



While the conclusion of the discussion is not related to this question, one comment disturbed me:




Unhappily, both words are misused there! – StoneyB 18 hours ago



Not to bundle two unrelated words and two different errors, let's focus on infers here. What is the misuse in the example? How should that word be used here correctly?



Answer



To infer is to understand or realize a fact that is not immediately obvious. To imply is to "say something without saying it", so to speak; when you imply something, you are indicating it to be true without ever actually saying it outright. You infer what I imply; I infer what you imply.


So, to put it in context, the phrase "A Japanese" may imply something offensive, but you, as a listener or reader, have inferred this.


word usage - Hear "an explosion noise" or "an explosion sound"?


Which word fits better, sound or noise, in this sentence?



Suddenly, I heard an explosion sound/noise.





python - Scaling window contents in Pyglet?


I'm trying to scale a window's contents so that every pixel displays at a multiple of its normal size. Basically I want to achieve larger pixels without scaling each and every individual sprite. This question is very similar to this one, however that one only scales a single sprite. How should I go about this?



Answer



I ended up figuring out the answer myself through trial and error; the solution follows.


You need to import openGL to get access to the scaling function:


from pyglet.gl import *

Next, toss in the following code after your game's window has been initialized:



#These arguments are x, y and z respectively. This scales your window.
glScalef(2.0, 2.0, 2.0)

At this point your resolution will double, but your window will stay the same size. You can correct this easily by doubling your window's width and height. Furthermore, your textures will appear blurry, so we need to fix that. We need to set parameters for the textures in your on_draw() function:


def on_draw(self):
    self.clear()  # clears the screen
    # The following two lines will change how textures are scaled.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
    self.label.draw()  # blits the label to the screen


You should now have pixels displaying at double their original size.


adjectives - If both gold and golden refer to "made of gold", how do I choose?


I always thought that if something is made of gold, it is a gold thing, if it looks like gold but might not be, it is golden. But looking in the dictionary, I can see I was wrong.


In the Cambridge dictionary, for both gold and golden it reads:



made of gold, or the colour of gold



For "golden" it reads in also:



made of gold




Example sentences for gold:



She always does her presents up beautifully in gold and silver paper.
She was wearing a gold Lurex top with a pink mini skirt.
There are a couple of fish with blue markings, and a few more with gold stripes down the side.




  1. I understand that in those examples "gold" refers to the color, but why is it not golden?

  2. How would the meaning change if I put "golden" there?


  3. How do I tell which one I should use?




Is Java viable for serious game development?



I have scoured the internet, but there are not very many resources for Java game development, not nearly as many as C++. In fact, most engines are written in C++. I tried to play a game made with jMonkeyEngine, but the game was terribly slow, to the point where my computer froze. I had no other Java applications running, and nothing too resource intensive. In contrast, my computer can play most modern 3D games with ease. If I continue to learn and improve Java now, and it turns out that later I am required to learn C++, making the switch might be difficult.


Is Java an acceptable language for serious game development? By serious, I mean high quality graphics, without much lag on modern computers. I also want to consider making games for consoles.



Answer



Yes it is; check this list for proof. Those are some games made with Java using the Lightweight Java Game Library (LWJGL). It is a low-level framework, which provides OpenGL for high-quality graphics and OpenAL for sound. It also provides an input API. With these you can quite easily get started on serious game development in Java.


I am currently writing my second 3D game as a hobby project in Java, and I just love it. In the past I used to write my games with C++, but after switching to Java there is no going back. Supporting multiple operating systems with Java can be very easy, for example my previous Java game, which I developed in Windows for a year, worked in Linux right away and in OS X with only one bug without any need to compile anything on those platforms.


On the other hand, with Java you have a couple of problems.




  1. Garbage collector. As others have stated, non-deterministic memory management is a problem, and you need to code that in mind.

  2. Lack of 3rd party libraries. Most of the available libraries do not support Java. On the other hand you always have the option to call these native libraries from Java also, but it's more work to do so. There are also Java ports or ready-made wrappers available for popular libraries, for example I'm using JBullet - Java port of Bullet Physics Library. On the other hand Java has a huge class library built-in, which reduces the need for third party libraries that are not game related. The lack of libraries has not been a problem for me, but I can imagine that it can be for others.

  3. Java is not supported by popular game consoles and there is no easy switch to those from Java as far as I know. On the other hand Android, which is a popular mobile platform, uses some form of Java. This is an option too, but don't expect the same Java code to work on both a PC and an Android device.

  4. Smaller community. Most game programmers use C++ and in my experience often dislike Java. Don't expect to get as much help from others. Don't expect to get a job in game development without C++ skills.


Wednesday, June 26, 2019

What makes in-game tutorials effective?


Has there been any research into how to maximize the effectiveness of in-game tutorials? Any blogs, articles or research papers would be appreciated.




Answer



The comments say it all, but ... no, there probably hasn't been much research on it. But, one thing is for sure: in-game, interactive tutorials work better than "read this" tutorials.


This is easy to see in Flash games; you can see the evolution of static screenshots/images linked from the main menu ("Tutorial" button) into in-game tutorials (or tutorial-like levels).


The benefit, obviously, would be seeing things happen and how actions play out, while the game progresses, instead of reading a bunch of instructions before you even have a good idea about how the game will go.


Use of "Have" in questions "Do you have" or "Have you"


I saw on TV a guy asking another man, "Have you a map?"



If I were him, I would probably say, "do you have a map?"


I would like to know what grammatical rule is being followed in this case.



Answer



The most common form of the question, in both British and American dialects is "Do you have..."


Using "Have you" is a non-typical use. It sounds old fashioned. For example there is a nursery rhyme which goes:


Baa baa black sheep,
Have you any wool?

There is a similar form "Have you got a map". This is quite common in some British dialects, but is frowned on by some teachers.


When dealing with a static game board, what are some methods to make it more interesting?


Let's say you have a game board that you look at. It does not move but there is some action going on. For example Chess, Checkers, Solitaire. The game I'm working on is not one of these but it's a good reference.


What are some methods you can apply to the game or the design that increases the appeal of the game to the user?


Of course you can make it prettier but what are some other methods you can use?


For example: Visual cues, game design changes, user interface arrangement, etc.





Present Perfect - the meaning


What does the sentence mean to a native English speaker?



  • The dog has stood there for a year.



Does it mean that the dog is still standing there? Or does it mean that it was there for a year and now it is gone, dead, or something else?




physics - How to obtain "gravity" and "initial impulse" given "desired time to reach max height" and "desired max height"?


I'm developing a 2D fighting game using frame-based time steps, i.e. each call to the game's Update() represents one frame, so there is no variable time step at all.


My jump physics code doesn't need to consider delta time, since each Update() corresponds exactly to the next frame step. Here is how it looks:


double gravity = 0.78;
double initialImpulse = -17.5;

Vector2 actualPosition;
double actualVerticalVelocity;

InitializeJump() {
    actualVerticalVelocity = initialImpulse;
}

Update() {
    actualPosition.Y += actualVerticalVelocity;

    actualVerticalVelocity += gravity;
}


It works great and results in a smooth arc. However, it's hard to determine the values for "gravity" and "initialImpulse" that will generate the jump that I want.


Do you know if it's possible to calculate "gravity" and "initialImpulse" if the only known variables are the "desired time to reach max height" (in frames) and the "desired max height"?


This question should lead to the answer I'm looking for, but its current answer does not fit my needs.


UPDATE:


As @MickLH pointed out to me, I was using an inaccurate Euler integration. Here is the corrected Update() code:


Update() {
    actualVerticalVelocity += gravity / 2;

    actualPosition.Y += actualVerticalVelocity;

    actualVerticalVelocity += gravity / 2;
}

Only the very first step of Update() changes: it will move the object by initialVelocity plus half of the gravity, instead of the full gravity. Each step after the first will add the full gravity to the current velocity.



Answer



Short answer:


gravity = -2*desiredMaxHeight/(desiredTimeInTicks*desiredTimeInTicks)
impulse = 2*desiredMaxHeight/desiredTimeInTicks




Long answer: (pre-calculus required)


Define your jump function, which is a simple integral over time:

h(t) = impulse*t + (gravity/2)*t^2
Now to find the peak of this function we solve for where the derivative equals zero:

h'(t) = impulse + gravity*t = 0

  • simplified:

t = -impulse/gravity
We have built a system of equations which:



  1. represent the relationship of t (time), h (height), impulse, and gravity

  2. constrain the height to its global maxima



The solution to this system yields the variables you are interested in:

gravity = -2*desiredMaxHeight/(desiredTimeInTicks*desiredTimeInTicks)
impulse = 2*desiredMaxHeight/desiredTimeInTicks

he was coaxed a safe distance away -- meaning?


Source: Tortoise pursues man in ‘slowest chase ever’



Once Rose was coaxed a safe distance away, the tortoise turned and beat a hasty retreat back to the female. Well, it wandered back as quickly as it could. Presumably, the female was still waiting in the bushes.




I'm not sure how to understand that part, specifically the expression was coaxed away.



Answer



In "coaxed a safe distance away", the phrase "a safe distance away" refers to where Rose went as a result of being coaxed.


English verbs of motion


The pattern that you see in the phrase "coax away" is very common in English verbs that describe motion, especially of Germanic origin. Many languages, like the Romance languages, don't have this pattern (and it doesn't work with many English verbs that come from Latin).


The pattern is: a verb of motion typically indicates the manner of motion but not the path or direction. The path or direction is indicated by a special word following the verb. Many words that work as direction words, though not all, can serve as prepositions in other contexts. Here are some more examples:



Rose ran away. (Rose ran to some place away from where she started.)


Rose walked inside. (Rose walked from outside some enclosure to a point inside the enclosure.)


Rose fled upstairs. (Rose fled from the ground floor and went up the stairs.)



Rose flew north.


Rose turned around.


Rose came home.


Rose wandered back.


Rose jumped ahead.



This works passively, too:



Rose was led away. (Someone led Rose to a place away from where she started.)


Rose was coaxed inside. (Someone gently persuaded Rose to enter some enclosure.)



Rose was driven upstairs. (Someone or something forced Rose to go up the stairs against her will.)



These direction words can form the nucleus of more-complicated phrases. For example:



Rose was coaxed ten feet outside the front door of the bank. (Rose was inside the bank, and then someone coaxed her to come outside the front door of the bank, and continue moving until she was ten feet away from the door.)



And of course, the same constructions work metaphorically:



Rose was coaxed out of her lunch money. (Rose was coaxed to give her lunch money to someone else.)




It usually doesn’t work with verbs from Latin


This pattern of "coaxed away" often doesn't work with verbs of motion that come from Latin:*



Rose entered inside. (This means that Rose entered (something) while she was already inside (some enclosure), not that the act of entering put Rose inside.)


Rose exited downstairs. (This means that Rose exited the building when she was downstairs, not that the act of exiting moved her downstairs. But "moved downstairs" does follow the Germanic pattern.)


"Hey, Rose, ascend inside! Ascend up the ladder!" yelled Johnny from the treehouse. (That doesn't sound like English. The native English verb is climb: "climb inside" or "climb up". With ascend, you use a direct object: "ascend the ladder".)



Notice that these last three verbs denote the path or direction of motion, not its manner.


If you're curious to learn more about this, look up "satellite-framing".





It's not a rule, of course. "Turn" is from Latin (actually Greek via Latin), and partly follows this pattern. "Turn" sort of denotes a path, and sort of denotes a manner of motion.


Tuesday, June 25, 2019

If you have more than one adjective to describe a noun, is there a specific order you put them in?


According to an answer on Quora to the question "What are the most frustrating grammatical errors you see online?", there is a specific way to order adjectives based on their type. How accurate is this?



#3: Ordering of Adjectives


If you have more than one adjective to describe a noun, there is a specific order you put them in, or else the clause will sound very awkward.


That order is: Opinion-size-age-shape-colour-origin-material-purpose-noun.


The most common example of this is "the lovely little old rectangular green French silver whittling knife."


If you put any of those adjectives out of order, the phrase will sound extremely confusing



How confusing would it sound to mix up that order? I mean, would it just sound a little like a non-native speaker, or would it mess up the sentence's meaning completely?



Doing a little research on the subject, it seems that there is not even a consensus on the right order, so I wonder how important this is, or under what conditions it may change (does American English have the same order as British English?, etc.).


Example Order 1:


Example Order 2:



Answer



The order of adjectives is a good guideline:




  • Quantity or number.

  • Quality or opinion.

  • Size.


  • Age.

  • Shape.

  • Color.

  • Proper adjective (often nationality, other place of origin, or material)

  • Purpose or qualifier.



The challenge to sticking to this order is that adjectives (or adjectival phrases) can modify other adjectives. Consider:



He wore a pair of dirty red shoes.




Are the shoes dirty? Or are they a "dirty red" color? Or this:



He was carrying a pair of old fish bowls.



Bowls for old fish? Or old bowls? A pair of fish or a pair of bowls?


The point is that you have to adjust the order to make your meaning clear.



He wore a dirty pair of red shoes.




Or use commas:



He wore a pair of dirty, red shoes.



(Edit) The link I include is only one such chart, because these are guidelines and not rules, and don't always cover every possibility. Furthermore, the order may vary depending on the words you choose, especially if those words have multiple meanings. Example



I have a pair of Norwegian blue parrots.



Norwegian blue is a (fictional) breed of parrot, and should be classified under "proper adjective", not color or origin. Other adjectives would come before or after, depending:




I have a pair of beautiful dead Norwegian blue talking parrots.



Eventually you get the hang of what sounds right, but it takes practice.


playstation4 - What's the process for making a PS4 game?


Now that Sony has said that devs can self publish for PS4, I'm betting that a lot more people will be interested in producing games for that platform.



What is the process for getting the SDK, documentation, and testing environment for a PS4?



Answer



Thus far, there is no indication that Sony's self-publishing option for the PlayStation 4 is actually an open publishing environment (like the PC). You still need to become a registered developer and that still involves being vetted and approved by Sony, licensing hardware, et cetera.


According to this press release:



For more information about the SCEA Publisher and Developer Relations Group, please visit: http://us.playstation.com/develop or email selfpublish@playstation.sony.com.



You will note that the linked site details a four-step process to register, but notes that, before applying, the following requirements should be met:



  • Form a corporate entity and have a tax ID number.


  • Have a static IP for your company that Sony can whitelist for developer network access.

  • Be physically located in US, Mexico, Central America, South America, or Canada.


The actual online application form asks for various other things an established company should have, as well as information about your development history and published titles, six-month product development plan, et cetera.


present perfect - Should I use PresPerf or Past in a relative clause subordinated to a PresPerf main clause?


a) He has already sent me the book he has read.
or
b) He has already sent me the book he read.


I am still struggling with the differences between the present perfect and the past simple. I have the amazing Oxford book 'Practical English Usage' next to me, and even so I cannot decide between the sentences above.


The first part of the sentence(s) is clear:

He has already sent me... - I use the present perfect because the book was recently received by me.


The second part is the problem. It seems that I could use both:
a)...the book he has read - The book was read by him before, until now - because of that could I use the present perfect?
b)...the book he read - The book was read by him a long time ago, during the 90's decade - because of that could I use the past simple?


Should I use one or the other, or can I use whichever tense I want?




Monday, June 24, 2019

prepositions - "looked to the brass eyelet-holes": why "to" and not "at"? What is the meaning?


From Thomas Hardy's Far From the Madding Crowd:



He thoroughly cleaned his silver watch-chain with whiting, put new lacing straps to his boots, looked to the brass eyelet-holes, went to the inmost heart of the plantation for a new walking-stick, and trimmed it vigorously on his way back; took a new handkerchief from the bottom of his clothes-box, put on the light waistcoat patterned all over with sprigs of an elegant flower uniting the beauties of both rose and lily without the defects of either, and used all the hair-oil he possessed upon his usually dry, sandy, and inextricably curly hair, till he had deepened it to a splendidly novel colour, between that of guano and Roman cement, making it stick to his head like mace round a nutmeg, or wet seaweed round a boulder after the ebb.



What is the meaning of the "look to" here? Why is the preposition to used? Does it mean "he looked at them thoroughly to see if they are clean and shiny"?



Answer



If you looked to something in Hardy's day, you tended to it. He presumably cleaned/polished the brass.



grammar - Reducing the adverbial particle and the NP1 and "to be"



Preparing the rats for the dissection session, they came across a queer phenomenon.



I think it was this before the ellipsis:



While they were preparing the rats for the dissection session, they came across a queer phenomenon.




Here the adverbial particle while, the NP1 (which is exactly the NP1 of the main sentence), and a form of the verb be have been omitted. Could you give me a link that explains this kind of ellipsis and the like?


Another instance:



Obviously sad, she entered the room.



which has been:



While she was obviously sad, she entered the room.




Answer




This isn't an ellipsis. An ellipsis is when there is omission that is implied from context (usually it's a repeated part that is omitted), e.g.


Should I call you, or you me? = Should I call you, or [should] you [call] me?


Your examples on the other hand are phrases that describe the subject.


e.g.



Obviously sad, she entered the room.



Here, "obviously sad" is an adjective phrase that describes her. You can rephrase to something longer, but that's not the same as an ellipsis.


Here's your other example:




Preparing the rats for the dissection session, they came across a queer phenomenon.



Again, "preparing the rats for the dissection session" is a phrase describing them. In this case it's a participle phrase rather than an adjective phrase because it contains a verb (preparing), but it functions the same way: describing the subject of the main clause.


opengl - When is the Z coordinate normalized in GLSL?


I thought that whenever you transform an object to world space, then view space and finally screen space, the last matrix you apply (the projection matrix) normalizes the z values to between 0 and 1.


However, I'm getting big z coordinates, which implies that the projection matrix didn't normalize it. Am I doing something wrong? I mean, all I do is:



gl_Position = projection * view * world * gl_Vertex;

Answer



You are missing a few key points.


After the application of the projection matrix, you have a 4-component vector in clip space (not screen space), which is a homogeneous coordinate system in which clipping will be performed (after your vertex shader).


After clipping, the surviving coordinates are divided by the w component to get normalized device coordinates in (-1, 1). A transformation will then be applied to move from NDC space to window coordinates, where the X and Y coordinates are normalized based on the viewport provided to OpenGL and the Z coordinate is normalized based on the depth range, which is ultimately what gives you your (0, 1) range for depth (unless you use glDepthRange to set a different range).
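For reference, the fixed-function arithmetic the pipeline applies after your shader runs is equivalent to this (written as a plain function for illustration, not an actual API call; rangeNear/rangeFar are the glDepthRange values, 0.0 and 1.0 by default):

// Clip space -> normalized device coordinates -> window-space depth.
float WindowDepth(float clipZ, float clipW, float rangeNear = 0.0f, float rangeFar = 1.0f) {
    float ndcZ = clipZ / clipW;                                  // perspective divide, in (-1, 1)
    return 0.5f * (rangeFar - rangeNear) * ndcZ + 0.5f * (rangeFar + rangeNear);
}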


If you want to access this normalized Z value in your vertex shader, you will need to do the computation manually in the shader (based on the information above).


c++ - How to generate portal zones?


I'm developing a portal-based scene manager. Basically all it does is check the portals against the camera frustum and render their associated portal zones accordingly. Is there any way my editor can generate the portal zones automatically, with the user only having to set the portals themselves? For example, the Max Payne 1/2 engine ("Max-FX") only required you to set the portal quads, unlike the C4 engine, where you also have to explicitly set the portal zones.




Sunday, June 23, 2019

architecture - Best solution for multiplayer realtime Android game



I plan to make a real-time multiplayer game for Android (2-8 players), and I'm considering which solution for organizing the multiplayer is best:





  1. Make a server on a PC and a client on mobile; all communication goes through the server (ClientA -> PC SERVER -> All Clients)




  2. Use Bluetooth; I haven't used it yet, and I don't know whether it is hard to build multiplayer over Bluetooth




  3. Make a server on one of the devices, and the other devices connect to it (over the network, but I don't know whether it is hard to solve the problem of devices behind NAT?)





  4. Another solution?





Answer



Disclaimer: I haven't done much with Java and the Android platform.


However, my more extensive experience with the .NET languages on the Windows Mobile platforms, along with the Windows platform, is that a good 75-90% of all the code required to create and maintain a Bluetooth or network data connection is provided by the OS or the libraries needed to access the hardware.


So far this also seems true with Android, with the OS exposing methods for creating data connections over Bluetooth or the internet, along with enabling/disabling the respective hardware.


I would imagine that Bluetooth would be the preferred method of connection, as this would be the least expensive to implement (no servers) and would allow for a more local gathering/game. Bluetooth is fairly easy to use; it functions similarly to normal network sockets once you know which device you want to connect to.


The others are correct in that Bluetooth v2.0/v2.1 is not currently capable of supporting large data loads. This will change with the eventual spread of v3.0 and higher, and there are ways of getting around this limitation.


For now, though, there is a conceptually simple, yet complex-to-implement, solution which you may wish to try. I have been using it for a while. It is similar to peer-to-peer, but it involves having the game hosted on all the devices simultaneously. That way, if a connection is temporarily lost or slowed, or a player is dropped for any reason, the other players will not be affected. This allows users that have been dropped to rejoin the ongoing game with little or no disruption to the other players or their own game.



CON: Each player would actually be playing their own somewhat unique instance of the game, that would be linked with the other players to keep the games from straying too far out of sync with each other.


CON: The supporting code can be extensive/complex and difficult to wrap your head around depending on what you want to achieve.


PRO: No central server or device required! No $$$ upkeep required.


PRO: A heavy exchange of data would only occur when a player (re)joined or a game was initialized. Even this can be reduced by ensuring that all the games are generated, and progress, the same way for all the players - POTENTIALLY reducing the energy consumption caused by heavy network usage.


PRO: Data becomes less time sensitive, as the devices would already have all the data they require to keep a game going without the other players. Allowing you to focus more on the actual game experience for the individual users, rather than a group of players.


I have lacked the time to implement a full in-depth game engine that utilizes this. The games I've made have been limited to recreating games similar to Monopoly, and Uno, which seemed to function extremely well.


The easiest was the one that emulated Uno. I essentially stacked the decks of the losers after a player won as to ensure that player won all the games. 95%+ of the time I couldn’t tell that I wasn’t playing the exact same game as everyone else.


I started building a game similar to Master of Orion II, but the game itself was a little much for me to undertake by myself.


graphics - Physically based shading - ambient/indirect lighting


I implemented a physically based path tracer after studying PBRT by M. Pharr and G. Humphreys. Now I'm trying to apply physically based rendering to real time graphics using OpenGL.


I want to start using Oren-Nayar and Cook-Torrance as diffuse and specular BRDF but I have a problem: how do I model indirect lighting?


In a path tracer (like the one contained in PBRT) the indirect/ambient light is given "automatically" from the path tracing algorithm, as it follows the path of light rays taking into account direct and indirect lighting.


How do I model indirect lighting in a physically based renderer written in OpenGL, i.e. using real-time computer graphics?



Answer



Disclaimer: the following answer was published in its entirety by Nathan Reed on a similar question the asker posted on the Computer Graphics Stack Exchange.


Real-time graphics deploys a variety of approximations to deal with the computational expense of simulating indirect lighting, trading off between runtime performance and lighting fidelity. This is an area of active research, with new techniques appearing every year.


Ambient lighting



At the very simplest end of the range, you can use ambient lighting: a global, omnidirectional light source that applies to every object in the scene, without regard to actual light sources or local visibility. This is not at all accurate, but is extremely cheap, easy for an artist to tweak, and can look okay depending on the scene and the desired visual style.


Common extensions to basic ambient lighting include:



  • Make the ambient color vary directionally, e.g. using spherical harmonics (SH) or a small cubemap, and looking up the color in a shader based on each vertex's or pixel's normal vector. This allows some visual differentiation between surfaces of different orientations, even where no direct light reaches them. (A simplified sketch of this idea follows this list.)

  • Apply ambient occlusion (AO) techniques including pre-computed vertex AO, AO texture maps, AO fields, and screen-space AO (SSAO). These all work by attempting to detect areas such as holes and crevices where indirect light is less likely to bounce into, and darkening the ambient light there.

  • Add an environment cubemap to provide ambient specular reflection. A cubemap with a decent resolution (128² or 256² per face) can be quite convincing for specular on curved, shiny surfaces.
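To make the first bullet concrete, the simplest possible directionally varying ambient is a two-colour hemisphere blend driven by the normal; this is a deliberately simplified sketch, and SH or a small cubemap generalise the same per-normal lookup:

struct Color { float r, g, b; };

// Blend a sky colour and a ground colour by the world-space normal's up component.
Color HemisphereAmbient(float normalY, Color sky, Color ground) {
    float t = 0.5f * (normalY + 1.0f);          // map normal.y from [-1, 1] to [0, 1]
    return { ground.r + (sky.r - ground.r) * t,
             ground.g + (sky.g - ground.g) * t,
             ground.b + (sky.b - ground.b) * t };
}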


Baked indirect lighting


The next "level", so to speak, of techniques involve baking (pre-computing offline) some representation of the indirect lighting in a scene. The advantage of baking is you can get pretty high-quality results for little real-time computational expense, since all the hard parts are done in the bake. The trade-offs are that the time needed for the bake process harms level designers' iteration rate; more memory and disk space are required to store the precomputed data; the ability to change the lighting in real-time is very limited; and the bake process can only use information from static level geometry, so indirect lighting effects from dynamic objects such as characters will be missed. Still, baked lighting is very widely used in AAA games today.


The bake step can use any desired rendering algorithm including path tracing, radiosity, or using the game engine itself to render out cubemaps (or hemicubes).



The results can be stored in textures (lightmaps) applied to static geometry in the level, and/or they can also be converted to SH and stored in volumetric data structures, such as irradiance volumes (volume textures where each texel stores an SH probe) or tetrahedral meshes. You can then use shaders to look up and interpolate colors from that data structure and apply them to your rendered geometry. The volumetric approach allows baked lighting to be applied to dynamic objects as well as static geometry.


The spatial resolution of the lightmaps etc. will be limited by memory and other practical constraints, so you might supplement the baked lighting with some AO techniques to add high-frequency detail that the baked lighting can't provide, and to respond to dynamic objects (such as darkening the indirect light under a moving character or vehicle).


There's also a technique called precomputed radiance transfer (PRT), which extends baking to handle more dynamic lighting conditions. In PRT, instead of baking the indirect lighting itself, you bake the transfer function from some source of light—usually the sky—to the resultant indirect lighting in the scene. The transfer function is represented as a matrix that transforms from source to destination SH coefficients at each bake sample point. This allows the lighting environment to be changed, and the indirect lighting in the scene will respond plausibly. Far Cry 3 and 4 used this technique to allow a continuous day-night cycle, with indirect lighting varying based on the sky colors at each time of day.


One other point about baking: it may be useful to have separate baked data for diffuse and specular indirect lighting. Cubemaps work much better than SH for specular (since cubemaps can have a lot more angular detail), but they also take up a lot more memory, so you can't afford to place them as densely as SH samples. Parallax correction can be used to somewhat make up for that, by heuristically warping the cubemap to make its reflections feel more grounded to the geometry around it.


Fully real-time techniques


Finally, it's possible to compute fully dynamic indirect lighting on the GPU. It can respond in real-time to arbitrary changes of lighting or geometry. However, again there is a tradeoff between runtime performance, lighting fidelity, and scene size. Some of these techniques need a beefy GPU to work at all, and may only be feasible for limited scene sizes. They also typically support only a single bounce of indirect light.



  • Screen-space global illumination, an extension of SSAO that gathers bounce lighting from nearby pixels on the screen in a post-processing pass.

  • Screen-space raytraced reflection works by ray-marching through the depth buffer in a post-pass. It can provide quite high-quality reflections as long as the reflected objects are on-screen.

  • Instant radiosity works by tracing rays into the scene using the CPU, and placing a point light at each ray hit point, which approximately represents the outgoing reflected light in all directions from that ray. These many lights, known as virtual point lights (VPLs), are then rendered by the GPU in the usual way.


  • Reflective shadow maps (RSMs) are similar to instant radiosity, but the VPLs are generated by rendering the scene from the light's point of view (like a shadow map) and placing a VPL at each pixel of this map.

  • Light propagation volumes consist of 3D grids of SH probes placed throughout the scene. RSMs are rendered and used to "inject" bounce light into the SH probes nearest the reflecting surfaces. Then a flood-fill-like process propagates light from each SH probe to surrounding points in the grid, and the result of this is used to apply lighting to the scene. This technique has been extended to volumetric light scattering as well.

  • Voxel cone tracing works by voxelizing the scene geometry (likely using varying voxel resolutions, finer near the camera and coarser far away), then injecting light from RSMs into the voxel grid. When rendering the main scene, the pixel shader performs a "cone trace"—a ray-march with gradually increasing radius—through the voxel grid to gather incoming light for either diffuse or specular shading.


Most of these techniques are not widely used in games today due to problems scaling up to realistic scene sizes, or other limitations. The exception is screen-space reflection, which is very popular (though it's usually used with cubemaps as a fallback, for regions where the screen-space part fails).
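

To give a feel for the screen-space part, the core of a screen-space reflection trace is just a loop that steps the reflected ray across the screen and through depth until it passes behind the stored depth. A simplified, hypothetical sketch (step setup, thickness tests, and color resolve omitted; it assumes a linear depth where larger values are farther from the camera):


using System;
using System.Numerics;

static class ScreenSpaceReflection
{
    // 'sampleDepth' stands in for a depth-buffer fetch at a [0,1] screen coordinate;
    // all names here are illustrative rather than any engine's real API.
    public static bool TraceDepthBuffer(Vector2 startUv, Vector2 stepUv,
                                        float startDepth, float stepDepth,
                                        Func<Vector2, float> sampleDepth,
                                        int maxSteps, out Vector2 hitUv)
    {
        Vector2 uv = startUv;
        float rayDepth = startDepth;

        for (int i = 0; i < maxSteps; i++)
        {
            uv += stepUv;          // advance the ray's projection across the screen
            rayDepth += stepDepth; // and advance its depth

            if (uv.X < 0f || uv.X > 1f || uv.Y < 0f || uv.Y > 1f)
                break; // ray left the screen: fall back to a cubemap

            if (sampleDepth(uv) < rayDepth)
            {
                hitUv = uv;  // the ray passed behind stored geometry: treat as a hit
                return true; // shade the reflection from the color buffer at hitUv
            }
        }

        hitUv = startUv;
        return false;
    }
}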


As you can see, real-time indirect lighting is a huge topic and even this (rather long!) answer can only provide a 10,000-foot overview and context for further reading. Which approach is best for you will depend greatly on the details of your particular application, what constraints you're willing to accept, and how much time you have to put into it.


auxiliary verbs - Is "she don't" sometimes considered correct form?


Recently I was exposed to a lot of uses of "She don't + infinitive" (3rd person singular + don't), instead of "she doesn't + infinitive" (3rd person singular + doesn't). I'm not sure if it is a mistake or just accepted usage sometimes.


I found it in very famous songs such as:


"She don't know me" by Bon Jovi enter image description here


Also in "Stan" by Eminem (at 3:04):



But she don't know you like I know you Slim, no one does
She don't know what it was like for people like us growin' up, you gotta call me man
I'll be the biggest fan you'll ever lose

Sincerely yours, Stan, P.S. we should be together too



You can find a lot more by searching YouTube for "he don't" or "she don't".




expressions - "Do you like the color red" vs "Do you like the red color"?



When your favorite color is red, do you say,



I like the color red.



or



I like the red color.



Is there any difference in meaning between these two ways of talking about your favorite color?




Object/subject question


Given the sentence:



I gave Tom a cup.



I can say that



I is the subject.

But what is the object here:


"Tom" or "a cup"?



Answer



Denis, look up this article: "Ditransitive Verb". A ditransitive verb can have two objects, one direct, one indirect. They are also called primary and secondary.


In the sentence




I gave Tom a cup.



Cup is the direct object, and Tom, the indirect.


Saturday, June 22, 2019

grammar - What are the differences between "three years old", "three-year-old" & "3-yr-old"?


I'm not sure which of the following statements is most appropriate in the case of a seedling:


Three year old seedling

Three-year-old seedling

3-yr-old seedling

The third one seems to be a nice abbreviation, but I'm not sure whether it's a good fit for a plant description that is going to be used in an article.



Could you please explain what the differences between them are? I'd like to use the third one; is it appropriate for an article about plants (which should sound official)?




american english - Is there any situation where we can use the preposition "in" before a bus?


I always use the preposition on before a bus. But today, when I was reading a novel (The Bridge Across Forever), I noticed the writer used the preposition in before a bus, so I got confused and landed here to get some help on it.




An excerpt from the novel:


No sooner had I fallen asleep in the bus?



I think I know why the writer used the preposition in before the bus, but I just want to make sure that my understanding is correct. I thought experts could help me in a much better way, so I landed here.



Answer



Both prepositions are correct but have slightly different meanings here, depending on how the author considers the bus. The interpretation also depends on context.¹


"On the bus" considers the bus functionally as a form of transport.


"In the bus" emphasises that the bus is a place.


So if I read that someone "fell asleep in the bus", my first impression is that the bus is not in use (maybe it is abandoned somewhere, or maybe the character in the novel broke into the bus company's parking lot and got on a bus at night).



If I read instead that someone "fell asleep on that bus", I imagine it to be a bus that is in use as transportation, so the character caught the bus and fell asleep while it travelled to its destination.




(As far as I can tell, this use of on is limited to forms of transportation. One can be on a bus, on a ship, or on a plane, while actually being inside. As others have pointed out, if you said you were "on that house" you would be standing on the roof.)





  1. It is possible to be on a bus that is not in service and vice versa, but that is unusual and requires additional context.


Friday, June 21, 2019

word choice - Pick odd one out from "man, drone, bison, bull"



Part 1



Pick out the word in each of the following that is different (Odd One Out)


Man, drone, bison, bull



The answer is - bison.


PS - I just want to know the reason or the logic used here to pick 'bison' as the odd one out.


Meanings:



  1. Man : a human being of either sex; a person.

  2. drone : a continuous low humming sound; a continuous musical note of low pitch.

  3. bison : a humpbacked shaggy-haired wild ox native to North America and Europe.

  4. bull : an uncastrated male bovine animal; push or move powerfully or violently.



Answer




Senses of the words other than the ones you found are the most relevant here. Here's what stands out most clearly to me (native AmE):




  1. man: an adult male human.




  2. drone: a male ant, bee, or wasp.




  3. bison: a buffalo.





  4. bull: a male cow.




So, bison stands out as the only one that doesn't mean the male of the species.


The word man also means a human of either sex. English has a number of words like this, where one sense of the word fully encompasses another sense. For example, humans are distinguished from animals, but also humans are a species of animal. A drink is a serving of any beverage, but a drink is also specifically an alcoholic beverage. When people say rectangle, they usually mean a shape different from a square, but in another sense, a square is a rectangle with all four sides equal. Somehow, these ambiguities seldom cause confusion.


One reason this question is pretty easy is that even though man has a generic sense that includes both sexes, it's a secondary sense of the word, in addition to being fairly rare today due to attempts to remove sexism from English. The primary sense has always been "adult male human".


By the way, usually the word for the male of a species also serves as the generic term for both sexes of that species: for example, lion, peacock, boar. The only exception I've ever heard of is cow: cow primarily means the adult female of that species, just as man primarily means the adult male of our species, but cow is also a generic term for any animal of that species regardless of its sex or age. This might be one reason why bull drew me into noticing the male senses of the other words. Also, bull is the name specifically for the male of many species; here is a big list.




By the way, this would be a terrible exam question, because other answers are also reasonable. For example, man is the only human; the others are animals (non-human animals, that is). Luke Sawczak's comment provides other reasonable interpretations, and Peter's answer gives two more. My explanation above tells how the maleness of three of the senses became most salient in the mind of one native speaker.


Awareness of what is most salient about words is crucial for communicating, since that's what guides your listener's attention to what you want them to see or think of. My main point here is that maleness is extremely salient in the words man and bull—the latter especially just after being primed by the word bison to think of animals. That tends to make the male sense of drone more salient, when normally it wouldn't be. This won't work the same way for all fluent speakers, but I'm sure it's very common. Many native speakers, especially if looking at this question casually or quickly, might not even notice that other answers are also just as reasonable.


How to create an extensible rope in Box2D?



Let's say I'm trying to create a ninja lowering himself down a rope, or pulling himself back up, all whilst he might be swinging from side to side or hit by objects. Basically like http://ninja.frozenfractal.com/ but with Box2D instead of hacky JavaScript.


Ideally I would like to use a rope joint in Box2D that allows me to change the length after construction. The standard Box2D RopeJoint doesn't offer that functionality.


I've considered a PulleyJoint, connecting the other end of the "pulley" to an invisible kinematic body that I can control to change the length, but PulleyJoint is more like a rod than a rope: it constrains maximum length, but unlike RopeJoint it constrains the minimum as well.


Re-creating a RopeJoint every frame using a new length is rather inefficient, and I'm not even sure it would work properly in the simulation.


I could create a "chain" of bodies connected by RotationJoints but that is also less efficient, and less robust. I also wouldn't be able to change the length arbitrarily, but only by adding and removing a whole number of links, and it's not obvious how I would connect the remainder without violating existing joints.


This sounds like something that should be straightforward to do. Am I overlooking something?


Update: I don't care whether the rope is "deformable", i.e. whether it actually behaves like a rope, or whether it collides with other geometry. Just a straight rope will do. I can do graphical gimmicks while rendering; they don't need to exist inside the physics engine. I just want something like a RopeJoint whose length I can change at will.



Answer



Ok, I naively assumed that LibGDX wrapped all of Box2D, so this would be purely a Box2D problem.


It turns out that vanilla Box2D, at least in trunk, has a function called b2RopeJoint::SetMaxLength. I've added it to the LibGDX wrapper and got a pull request merged within minutes. It is now available (and working) in LibGDX nightlies.
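

For anyone landing here later, a minimal sketch of how this can be used, assuming the LibGDX nightly mentioned above where the RopeJoint wrapper exposes setMaxLength(); body and variable names are illustrative:


import com.badlogic.gdx.physics.box2d.Body;
import com.badlogic.gdx.physics.box2d.World;
import com.badlogic.gdx.physics.box2d.joints.RopeJoint;
import com.badlogic.gdx.physics.box2d.joints.RopeJointDef;

public class ExtensibleRope {
    private final RopeJoint rope;

    public ExtensibleRope(World world, Body anchorBody, Body ninjaBody, float initialLength) {
        RopeJointDef def = new RopeJointDef();
        def.bodyA = anchorBody;        // the point the ninja hangs from
        def.bodyB = ninjaBody;
        def.localAnchorA.set(0, 0);
        def.localAnchorB.set(0, 0);
        def.maxLength = initialLength; // rope length in world units
        rope = (RopeJoint) world.createJoint(def);
    }

    // Call once per frame: a positive speed lowers the ninja, a negative one pulls him up.
    public void climb(float speed, float delta) {
        rope.setMaxLength(Math.max(0.5f, rope.getMaxLength() + speed * delta));
    }
}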



java - Draw Rectangle To All Dimensions of Image


I have some rudimentary collision code:



public class Collision {
    static boolean isColliding = false;
    static Rectangle player;
    static Rectangle female;

    public static void collision(){
        Rectangle player = Game.Playerbounds();
        Rectangle female = Game.Femalebounds();

        if(player.intersects(female)){
            isColliding = true;
        }else{
            isColliding = false;
        }
    }
}

And this is the rectangle code:


public static Rectangle Playerbounds() {
    return(new Rectangle(posX, posY, 25, 25));
}

public static Rectangle Femalebounds() {
    return(new Rectangle(femaleX, femaleY, 25, 25));
}

My InputHandling class:


public static void movePlayer(GameContainer gc, int delta){
    Input input = gc.getInput();

    if(input.isKeyDown(input.KEY_W)){
        Game.posY -= walkSpeed * delta;
        walkUp = true;

        if(Collision.isColliding == true){
            Game.posY += walkSpeed * delta;
        }
    }

    if(input.isKeyDown(input.KEY_S)){
        Game.posY += walkSpeed * delta;
        walkDown = true;

        if(Collision.isColliding == true){
            Game.posY -= walkSpeed * delta;
        }
    }

    if(input.isKeyDown(input.KEY_D)){
        Game.posX += walkSpeed * delta;
        walkRight = true;

        if(Collision.isColliding == true){
            Game.posX -= walkSpeed * delta;
        }
    }

    if(input.isKeyDown(input.KEY_A)){
        Game.posX -= walkSpeed * delta;
        walkLeft = true;

        if(Collision.isColliding == true){
            Game.posX += walkSpeed * delta;
        }
    }
}

The code works partially: only the right and top sides of the images collide. How do I correct the rectangle so that collision works on all sides?



Answer



//This checks overlap in a single dimension -- either x or y
public boolean checkOverlap(int start0, int end0, int start1, int end1)
{
    if(start0 > start1 && start0 < end1)
        return true;
    if(start1 > start0 && start1 < end0)
        return true;
    return false;
}

//This does it for two dimensions. If overlapping in X AND Y, you've collided.

//You can call this in your movePlayer handler, but if the "females" (er!) move
//AS WELL, then you must do it per update or you will miss collisions (unless
//you make sure all "females" are moved before the player, on each update).
public boolean isColliding(Rectangle rect0, Rectangle rect1)
{
    if (checkOverlap(rect0.x, rect0.x + rect0.width, rect1.x, rect1.x + rect1.width) &&
        checkOverlap(rect0.y, rect0.y + rect0.height, rect1.y, rect1.y + rect1.height))
    {
        //Resolve your collision here. You don't need to store isColliding -- just
        //calculate, then act on the result.
        return true;
    }
    return false;
}

On each game loop update: get your inputs, move the player, get all inputs for other entities and move them, then at the end, run through the full entities list (player + "females") and run isColliding() on each pair. This is the standard way to do 2D AABB collisions.


Actually, if you are running through the full list of n entities and comparing them against the n-1 other entities, then for the first entity in the list, call it a, you will have to check against b through n. On checking b, you will not have to check it against a (since a was just checked against b in the prior step) or against itself, so you would check from c through n... and so on for each remaining entity. This essentially looks like:


for (int i = 0; i < entities.length; i++) //run through ALL entities
{
    Entity e1 = entities[i];
    for (int j = i+1; j < entities.length; j++) //run through each entity AFTER e1
    {
        Entity e2 = entities[j];
        if (isColliding(e1.rect, e2.rect))
            resolveCollisionFor(e1, e2);
    }
}

Do "would you care for some coffee?" and "Do you care for some coffee?" have the same meaning?


I've never really heard the latter before, and I'm not really sure if it has the same meaning as the former (I'm not even sure if it makes grammatical sense). I'd appreciate it if you'd let me know whether there is any difference in meaning between them.



Answer



The construction Would you care for [some] X? is a kind of "frozen form" largely restricted to extremely polite / formal contexts, usually where the speaker (a restaurant waiter, for example) is addressing someone of higher social status. X can be either a noun (something being offered) or an infinitive-based verb clause (an activity being proposed)...



1: Would you care for [some] coffee?
2: Would you care for a cup of tea?
3: Would you care to sit outside?
4: Would you care to follow me?




Note that the first three are all servile / polite offers (speaker will provide you with coffee, tea, or an outside seat if you want it). But #4 is effectively a request - a stylized / polite way of saying Please follow me.


Also note that unless he was being deliberately facetious, if the addressee didn't want coffee, he probably wouldn't reply...



Thank you but no. I don't care for coffee



...because the negated statement I don't care for X is a stylized / dated way of saying I don't like X (ever, I'm not just refusing the current offer). And the non-negated Thank you, yes. I [do] care for coffee simply isn't something native speakers say.




The primary difference between Would / Do you care for X? is that we use would in the context of polite / formal offers and requests, where care for means want / like. We only normally use do in contexts where care for means to look after / tend to the needs of (or sometimes to feel deep affection for, but that "romantic" sense is becoming increasingly dated today).


physics - How could I constrain player movement to the surface of a 3D object using Unity?



I'm trying to create an effect similar to that of Mario Galaxy or Geometry Wars 3 where as the player walks around the "planet" gravity seems to adjust and they don't fall off the edge of the object as they would if the gravity was fixed in a single direction.


[Image (source: gameskinny.com)]


[Image: Geometry Wars 3]


I managed to implement something close to what I'm looking for using an approach where the object that should have the gravity attracts other rigid bodies towards it, but using Unity's built-in physics engine and applying movement with AddForce and the like, I just couldn't get the movement to feel right. I couldn't get the player to move fast enough without them starting to fly off the surface of the object, and I couldn't find a good balance of applied force and gravity to compensate for this. My current implementation is an adaptation of what was found here.


I feel like the solution would probably still use physics to get the player grounded onto the object if they were to leave the surface, but once the player has been grounded, there would be a way to snap the player to the surface, turn off physics, and control the player through other means. But I'm really not sure.


What kind of approach should I take to snap the player to the surface of objects? Note that the solution should work in 3D space (as opposed to 2D) and should be able to be implemented using the free version of Unity.



Answer



I managed to accomplish what I needed, primarily with the assistance of this blog post for the surface-snapping piece of the puzzle, and came up with my own ideas for player movement and the camera.


Snapping Player to the Surface of an Object



The basic setup consists of a large sphere (the world) and a smaller sphere (the player), both with sphere colliders attached to them.


The bulk of the work being done was in the following two methods:


private void UpdatePlayerTransform(Vector3 movementDirection)
{
    RaycastHit hitInfo;

    if (GetRaycastDownAtNewPosition(movementDirection, out hitInfo))
    {
        // Rotate the player so its up axis matches the surface normal at the hit point.
        Quaternion targetRotation = Quaternion.FromToRotation(Vector3.up, hitInfo.normal);
        Quaternion finalRotation = Quaternion.RotateTowards(transform.rotation, targetRotation, float.PositiveInfinity);

        transform.rotation = finalRotation;
        // Place the player on the hit point, offset along the normal so it sits on the surface.
        transform.position = hitInfo.point + hitInfo.normal * .5f;
    }
}

private bool GetRaycastDownAtNewPosition(Vector3 movementDirection, out RaycastHit hitInfo)
{
    // Cast from where the player would be after this frame's movement,
    // straight down along the player's local down vector, to find the surface below.
    Ray ray = new Ray(transform.position + movementDirection * Speed, -transform.up);

    return Physics.Raycast(ray, out hitInfo, float.PositiveInfinity, WorldLayerMask);
}

The Vector3 movementDirection parameter is just what it sounds like: the direction we are going to move our player in this frame. Calculating that vector, while it ended up relatively simple in this example, was a bit tricky for me to figure out at first. More on that later, but just keep in mind that it's a normalized vector in the direction the player is moving this frame.



Stepping through, the first thing we do is check whether a ray, originating at the hypothetical future position and directed along the player's down vector (-transform.up), hits the world, using WorldLayerMask, which is a public LayerMask property of the script. If you want more complex collisions or multiple layers you will have to build your own layer mask. If the raycast successfully hits something, the hitInfo is used to retrieve the normal and hit point to calculate the new position and rotation of the player, which should be right on the object. Offsetting the player's position may be required depending on the size and origin of the player object in question.


Finally, this has really only been tested and likely only works well on simple objects such as spheres. As the blog post I based my solution on suggests, you will likely want to perform multiple raycasts and average them for your position and rotation to get a much nicer transition when moving over more complex terrain (see the sketch below). There may also be other pitfalls I've not thought of at this point.
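

For reference, one hypothetical way that averaging could look is below; it casts a few extra rays around the player's footprint and blends the resulting points and normals before applying them (the offsets and method name are illustrative, not part of the original script):


// Cast several rays around the player's base and average the hit points/normals,
// which smooths the snapping when moving across bumpier or faceted terrain.
private bool GetAveragedSurface(Vector3 origin, out Vector3 avgPoint, out Vector3 avgNormal)
{
    Vector3[] sampleOffsets =
    {
        Vector3.zero,
        transform.right * 0.25f, -transform.right * 0.25f,
        transform.forward * 0.25f, -transform.forward * 0.25f
    };

    avgPoint = Vector3.zero;
    avgNormal = Vector3.zero;
    int hits = 0;

    foreach (Vector3 offset in sampleOffsets)
    {
        RaycastHit hitInfo;
        Ray ray = new Ray(origin + offset, -transform.up);
        if (Physics.Raycast(ray, out hitInfo, float.PositiveInfinity, WorldLayerMask))
        {
            avgPoint += hitInfo.point;
            avgNormal += hitInfo.normal;
            hits++;
        }
    }

    if (hits == 0)
        return false;

    avgPoint /= hits;
    avgNormal = avgNormal.normalized; // renormalize the averaged surface normal
    return true;
}


The averaged point and normal could then be fed into the same rotation and positioning logic as in UpdatePlayerTransform above.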


Camera and Movement


Once the player was sticking to the surface of the object, the next task to tackle was movement. I had originally started out with movement relative to the player, but I started running into issues at the poles of the sphere, where directions suddenly changed, making my player rapidly change direction over and over and never letting me pass the poles. What I wound up doing was making my player's movement relative to the camera.


What worked well for my needs was to have a camera that strictly followed the player, based solely on the player's position. As a result, even though the camera was technically rotating, pressing up always moved the player towards the top of the screen, down towards the bottom, and so on with left and right.


To do this, the following was executed on the camera where the target object was the player:


private void FixedUpdate()
{
    // Calculate and set camera position
    Vector3 desiredPosition = this.target.TransformPoint(0, this.height, -this.distance);

    this.transform.position = Vector3.Lerp(this.transform.position, desiredPosition, Time.deltaTime * this.damping);

    // Calculate and set camera rotation
    Quaternion desiredRotation = Quaternion.LookRotation(this.target.position - this.transform.position, this.target.up);
    this.transform.rotation = Quaternion.Slerp(this.transform.rotation, desiredRotation, Time.deltaTime * this.rotationDamping);
}

Finally, to move the player, we leveraged the transform of the main camera so that, with our controls, up moves up, down moves down, and so on. It is here that we call UpdatePlayerTransform, which snaps our position onto the world object.


void Update () 
{
    Vector3 movementDirection = Vector3.zero;
    if (Input.GetAxisRaw("Vertical") > 0)
    {
        movementDirection += cameraTransform.up;
    }
    else if (Input.GetAxisRaw("Vertical") < 0)
    {
        movementDirection += -cameraTransform.up;
    }

    if (Input.GetAxisRaw("Horizontal") > 0)
    {
        movementDirection += cameraTransform.right;
    }
    else if (Input.GetAxisRaw("Horizontal") < 0)
    {
        movementDirection += -cameraTransform.right;
    }

    movementDirection.Normalize();

    UpdatePlayerTransform(movementDirection);
}

To implement a more interesting camera while keeping the controls about the same as what we have here, you could use a camera that isn't rendered, or just another dummy object, as the basis for movement, and then use the more interesting camera to render what you want the game to look like. This will allow nice camera transitions as you go around objects without breaking the controls.


Simple past, Present perfect Past perfect

Can you tell me which form of the following sentences is the correct one please? Imagine two friends discussing the gym... I was in a good s...