Thursday, March 31, 2016

javascript - Approaches for storing grid-like information



I am drawing this simple grid on my NodeJS server:


var grid = [];

for (var x = 0; x < 20; x++) {
    grid[x] = [];
    for (var y = 0; y < 20; y++) {
        grid[x][y] = 0;
    }
}


console.log(grid);

The outcome looks like this:


(Screenshot: the console prints a 20×20 grid filled with zeros)


I know, pretty right?!


0 is supposed to indicate FREE, thus if a player requests to move to a field with 0, he will!


The problems come when I want to add more than just Free/Occupied; for instance, I would like to give each array element an ID number for the client updates, or to store certain features of the field.


I tried to assign { something: N, something2: N}


But I thought it looked rather expensive performance-wise in the long run (on a big grid).



I read about using a single Object for several elements, but I cannot find this any more.


Should I perhaps use an Array of Objects, or an Additional Array inside each X,Y element?


Any performance / convenience / anything goes tips are welcome :D


Edit: Thank you for the ideas so far. I am now thinking of perhaps using strings: storing Variable_A + "," + Variable_B and then later using .split() to get the information back. Any thoughts on this?



Answer



You seem to match this frequent use case:



  • a big map with each cell having a few characteristics

  • some objects being somewhere in the map



I suppose the map is big, since you ask about performance. If the map is small, then both the question and my answer seem less relevant.


I usually use



  • a grid containing an int for each cell, this int being able to carry up to 32 flags using bitwise operations (see the sketch below)

  • a list of objects (may be an array or database records), each one having an x and a y


Not referencing the objects from the grid avoids coherency/race problems and is lighter.


A frequent practice is also to use a one-dimensional array and to address the cells as a[x + W*y]. It makes many operations (like cloning the whole map, or any range operation that doesn't depend on position) easier and faster. I'm not really familiar with Node.js, but I suppose the Buffer class should be used.
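

To make the first two bullets and the flat-array addressing concrete, here is a minimal Node.js sketch (the flag names and helper functions are illustrative, not from the answer):


var W = 20, H = 20;

// A few example bit flags for cell properties (illustrative values).
var OCCUPIED = 1 << 0;
var WATER    = 1 << 1;
var FOREST   = 1 << 2;

// One byte per cell allows 8 flags; switch to Uint32Array for up to 32.
var cells = new Uint8Array(W * H);

function setFlag(x, y, flag)   { cells[x + W * y] |= flag; }
function clearFlag(x, y, flag) { cells[x + W * y] &= ~flag; }
function hasFlag(x, y, flag)   { return (cells[x + W * y] & flag) !== 0; }

// Moving objects live in their own list, not in the grid.
var objects = [
    { id: 1, x: 3, y: 7 },
    { id: 2, x: 12, y: 0 }
];

setFlag(3, 7, OCCUPIED);
console.log(hasFlag(3, 7, OCCUPIED)); // true


Cloning the whole map is then just cells.slice(), and the same flat buffer can be sent over the network as-is.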


As I suppose you'll want to use a browser as the final interface, here's a trick I use: I encode my maps as PNG, with each color of the palette simply being one of the possible flag combinations. This makes




  • storage and transmission very efficient (PNG is compressed)

  • map operations easy with image based operations

  • fast rendering at low resolution easy in the browser using image rendering
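

For the last point, here is a minimal browser-side sketch (the canvas and mapImage element names are assumptions, not from the answer): draw the small map image scaled up with smoothing disabled so each cell stays a crisp block; the CSS rule image-rendering: pixelated does the same when scaling an <img> directly.


var canvas = document.getElementById('map');          // assumed <canvas> element
var ctx = canvas.getContext('2d');
var mapImage = document.getElementById('mapImage');   // assumed <img> holding the PNG map

// Disable smoothing so the low-resolution map keeps hard pixel edges when scaled up.
ctx.imageSmoothingEnabled = false;
ctx.drawImage(mapImage, 0, 0, canvas.width, canvas.height);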


subject verb agreement - A sentence from Swan's PEU using plural 'are' and not singular 'is'


In Swan's PEU (3rd edition), entry number 157, on 'discourse markers', reads...



There are a very large number of these 'discourse markers', and it is impossible to give a complete list in a few pages.



I'm finding it difficult to understand. Why is there a plural verb are and not is? I also note that discourse markers are defined in quotes ('...') which means it's considered as a single entity.




To me, "There are a very large number of 'X' out there" does not look any better than
"There is a very large number of 'X' out there".



Let me quote a quote here



Despite the digital age, there is a very large number of venues and spaces that are looking for plays, and many of them are looking for new plays.



Also, a result from the Google Books



What makes the fields 'classical' is the fact that there is very large number of sources contributing.




What am I missing?



Answer



CGEL (The Cambridge Grammar of the English Language) discusses (at 3.3, p.349) situations where quantification is “expressed by means of a noun as head with an of PP as complement”. In these constructions terms like a lot, lots, a great deal, plenty, oodles and a number are in syntactic form the heads of the NPs in which they occur, but the semantic head is the ‘oblique’, the term which is the object of of:



a. Lots of work [oblique] is left to be done.
b. A lot of people [oblique] were present.



Note that in (a) lots is plural, but the entire NP takes a singular verb, is, because its oblique, work, is non-count; while in (b) a lot is singular, but the entire NP takes a plural verb, were, because its oblique, people, is plural-only.


In these cases, says CGEL, “lot is number-transparent in that it allows the number of the oblique to percolate up to determine the number of the whole NP.”



In other words, semantics trumps syntax with these “number transparent quantificational nouns”: the verb agrees with the sense rather than the form.


In your sentence, consequently, the verb correctly takes the plural because its real subject is the oblique these 'discourse markers', not a very large number.


With many singular number-transparent nouns such as number, however, many people feel uncomfortable with a violation of syntactic concord. Consequently, you are just as likely to encounter a 'number-opaque' treatment. In the circumstances, neither can legitimately be labeled “incorrect”.


Note, by the way, that the phrase a very large number acting as an NP on its own, not quantifying an oblique, will take a singular verb:



A small number is expressed in words; a large number is expressed in digits; a very large number is expressed in exponential notation.



grammar - "I am home" Really?


What is the reason behind some words not taking certain prepositions, which are instead just assumed?


For example:



I am home




is a perfectly valid sentence meaning one has reached home, but if someone completely new to English comes across this sentence, they will probably think:



"I am home", how can a person be a home?



Also, see these sentences:



I go home


Please come here


Go there and purchase your tickets




As you can see, some words like here, there, home don't accept "to" before them.



I go to home



is wrong I suppose.


So, why don't certain words take "to" or other prepositions before them?



Answer



You should notice that in all the sentences you wrote where you assumed a preposition is missing, the prepositions are not actually missing; this is in accordance with the grammar rule. You see, no preposition appears before home, here, and there because home, here and there are adverbs, and we don't need a preposition before an adverb.


You can also check this link



Hope this helps.


difference - Expression to differentiate between listening problem and understanding problem?


If, during a conversation, I have a problem understanding what the other speaker is saying, I may come up with the following polite statements:




"Sorry...?"


"Sorry, could not get you..."


"Pardon me..."


"Come again please..."



But my inability to understand can be due to two reasons. 1) "I could not hear what you said" and 2) "I could not understand what you said."


But as far as I know, the aforementioned statements can't help differentiate between the situations. So my question is: is there any polite expression out there which can explicitly tell the speaker for what reason I could not understand them?


If such an expression/phrase exists, the speaker can understand the reason and take the necessary step accordingly, i.e. either raise his voice or rephrase what he is saying.



Answer



The two statements you quoted ("I could not hear what you said" and "I could not understand what you said") exactly differentiate the situations.

If you can't hear something, that means the sound waves were not entering your ear canal at sufficient amplitudes for you to translate into speech patterns.


If you couldn't understand something, it's because your brain could not make sense of the sounds it received.


It is true that sometimes people say they couldn't understand something because they couldn't hear it, but if you can help it, you should avoid that and only use understand when you mean understand and use hear when you mean hear.


As far as politeness goes, the usual response is to add extra words to make it clear what you are asking for.


"I'm sorry but I couldn't hear you, would you please repeat that?"


"I'm sorry but I'm unfamiliar with hermeticity would you please explain that?"


Wednesday, March 30, 2016

indefinite article - Some words starting with vowels, preceded by 'an' instead or 'a'



We've all been taught in primary school how we're supposed to use 'an' instead of 'a' when we talk about an object whose name starts with a vowel, in its singular form.



-> An Apple
-> An Ostrich
-> An Elephant





But when we refer to one university, universe or even adjectives like useless, we use 'a' instead of 'an'.



-> A University
-> A Useless Boy
-> A Unique sight




Why is that so? I get the fact that we occasionally use 'an' for words which start with consonants, like 'hour' and 'honor', because the 'h' is silent. But what of the exceptions in the case of words starting with A, E, I, O, U? Or is this just for certain words that start with 'U'?



Answer



OALD on 'vowel':




a letter that represents a vowel sound. In English the vowels are a, e, i, o and u.



This is clear.


But then, whether the article is 'a' or 'an' depends on how we pronounce the word.


Now, to your concern: why do those words take the indefinite article 'a' and not 'an'?


Check this:



Use 'A' before words such as "European" or "university" which sound like they start with a consonant even if the first letter is a vowel. Also use 'A' before letters and numbers which sound like they begin with a consonant, such as "U", "J", "1" or "9". Remember, it is the sound not the spelling which is important. For example, "1" is spelled O-N-E; however, it is pronounced "won" like it starts with a "W".




Examples:



She has a euro (sounds like "yu-ro")
That number is a "1" (sounds like "won").



So, because of their consonant sound we use 'a' instead of 'an'.




University is 'यूनिवर्सिटी' (yūnivarsiṭī) and there you have 'यू' (yū) to pronounce. It is not pronounced with the vowel a (अ), e (ए), i (ई, आई), o (ओ), or u (अ).


English word for "repetitive, boring work"?


Is there an English word for "work which is repetitive (and often boring), but which must be done"?



Answer



Drudgery is routine or dull work like domestic drudgery. Donkey work is an informal (British) synonym.




"The donkey work had been done - the intricate brushing away of thousands of years of dust and soil was a task undertaken by the steady handed experts."



meaning - Differences among "It feels...", "It looks...", and "It seems..."


There is a topic about the difference between "it seems" and "it looks like": What is the difference between "it seems" and "it looks like"?


But what about the difference between "it feels" and the other two?



  • "It feels" x "It looks" x "It seems"



Answer





"It looks"
"It seems"
The verb "to seem" is actually the passive of the verb "to see", but has gone beyond sight in use. Both "looks" and "seems" can refer to how something is seen.


The book seems green.
The book looks green.



"Seems" and "feels" can involve touching and imaginings beyond senses:



This cloth seems rough.

This cloth feels rough.
This situation feels dangerous to me.
This situation seems dangerous to me.



"Seems" can refer to hearing,



The note seems flat.
The note sounds flat.



"Seems" can refer to taste:




The tomato seems salty.
The tomato tastes salty.



If there is any doubt as to which of "seems", "looks", or "feels" to use, use "seems"; it can serve more meanings than the others.


software engineering - What was the typical toolchain for DOS game development?



I'm wondering how people used to write DOS games (like Doom), I can't find much on this, but would love to learn more about the earlier days of game development.


What language was used predominantly?


I presume it was C. Or C++ already?


What IDEs (or editors/compilers) were popular?



Microsoft Visual C/C++ (or Microsoft C/C++ as I believe it used to be called) didn't exist back then AFAIK. So what did people use? EDIT and a command-line compiler from Intel or something?


What APIs were used?


What was common for 2D games? What about 3D games like Doom and Tomb Raider?


Anything else that's different from today?


I'd be happy to hear any other differences, like what image/audio formats were used.



Answer



Language: C was predominant, but C++ was around and used.


Dev tools: Development environments included those from Borland and Watcom (almost unheard of today) among others. Both Borland and Watcom had their own compilers and their own IDEs. Borland was by far the most popular in general, though Watcom had a reputation for producing faster compiled programs, iirc.


APIs: Few APIs existed or were used. Video programming often consisted of directly writing pixels to the VGA framebuffer. Even 3D games were software rasterized. The Miles sound API was used for audio, and it included drivers internally, as the OS didn't have its own audio framework or drivers. Keyboard and mouse input were generally read straight from the system. There were a couple of memory extenders for 32-bit mode which were very popular and necessary towards the end of DOS's reign. The hardware was simple, thankfully, but it was definitely a pain in the butt writing games that worked on a variety of hardware. Libraries to deal with all the simple low-level stuff (like SDL, SFML, GLFW, etc.) did not exist, and porting was a huge amount of the work in releasing a game due to all the different platforms and hardware popular at the time (though it's getting equally bad these days, what with all the consoles, handhelds, and mobile devices, plus Windows and the fringe desktop OSes like OSX and Linux).


On a side note to the previous point, Doom was not 3D the way we know it today. That is, it imposed huge limitations on 3D environments due to its highly specialized software rasterization algorithm, and characters and items were all just sprites.



File Formats: Asset formats were just as proprietary to the engine then as they are now. I vaguely recall Bink being around back then for video (which was very rare, generally only in opening and closing sequences), and I think Creative had some specialized sound formats. I'm unsure what source or intermediary formats were popular for sound or video back then, but TGA was pretty popular for images.


opengl - 3D Camera Rotation


Please forgive me, but I need help. I've been stuck on this for a few weeks now, I'm making no progress, everywhere I go I see a different answer, and everything I try doesn't work. I've had enough tips and advice; now I really just need someone to give me the answer to work backwards from, because I can't understand this.



What has made this subject most confusing is the way everyone uses a different set of conventions or rules, and their answers are based on their own conventions without defining what they are.


So here is the set of conventions I've formed based on what seems to be most common and logical:



  1. Right Hand Rule for axis.

  2. Positive Y is up, Positive Z is towards the viewer, Positive X is to the right.

  3. Row Major matrixes, transposed when sent to shaders.



    • Pitch: rotation about the X axis

    • Yaw: rotation about the y axis


    • Roll: rotation about the z axis



  4. Rotation order: Roll, Pitch, Yaw (is this correct? can someone check me on this?)

  5. Positive rotation values, looking down from positive end of an axis, results in clockwise rotation.

  6. Default direction for 0 rotation across all axis is a vector pointing down to negative Y.


.. given those conventions (by all means correct me if they are wrong!), how does one:



  • Write a LookAt function? (lookAt(vector position, vector eyefocus, vector up))


  • Calculate a rotation matrix. (rotation(x, y, z))


I've tried answering these two questions myself at least over the past 3 weeks, I've re-written my LookAt & Rotation Matrix function at least 30 times, I've tested dozens of methods and read through material I've seen on hundreds of websites and read many answered questions, copied other people's code, and nothing I've made so far has worked, everything has produced the wrong result. Some of which have produced some hilariously bizarre outputs not even close to correct rotation.


I've been working on this every night with the exception of last night because I was getting so frustrated with the repeated failure that I had to stop and take a break.


Please, just show me what the correct method is so I can work backwards from that and figure out how it works, I'm just not getting the correct answer and this is driving me a little crazy!


I'm writing in Java but I'll take code written in any language, most of my 3D rendering code is actually working quite brilliantly, it's just the maths I can't understand.


UPDATE: SOLVED


Thank you for your help! I now have a working LookAt function that I actually understand, and I couldn't be happier (if anyone would like to see it, by all means ask).


I did try again at creating a rotation matrix based off pitch/yaw/roll variables and it again seemed to fail, but I've decided to dump attempting to use euler angles for the freelook camera as it seems to be ill-suited for the role, instead I'm going to create a quaternion class, might have better luck going down that path, otherwise I'll resort to using the pitch/yaw as spherical coordinates and rely on the new working LookAt function for rotation.


If anyone else is facing a similar problem and wants to ask me questions, feel free to.



At least I'm not stuck anymore, thanks for the help!



Answer



What you are looking for can be found in this very good explanation: http://www.songho.ca/opengl/gl_transform.html


But since I found it sort of confusing without hand holding I will try to explain it here.


At this point you need to consider 5 coordinate systems and how they relate to each other. These are the window coordinates, the normalized device coordinates, the eye coordinates, the world coordinates and the object coordinates.


The window coordinates can be seen as the "physical" pixels on your screen. They are the coordinates that the windowing system refers to, and if you operate at your monitor's native resolution, these are actually individual pixels. The window coordinate system is 2D and integer-valued, and is relative to your window. Here x+ is right and y+ is down, with the origin at the top left corner. You encounter these when you, for example, call glViewport.


The second set are the normalized device coordinates. These refer to the space set up by the active viewport. The visible area of the viewport goes from -1 to +1 and thus has the origin in the center. Here x+ is right and y+ is up. You also have z+ pointing "out" of the scene. This is what you describe in 1.


You have no control over how you get from the normalized device coordinates to the window coordinates; this is done implicitly for you. The only control you have is through glViewport or similar.


When working with openGL, your final result will always be in normalized device coordinates. As a result you need to worry how to get your scene rendered in these. If you set the projection and model-view matrix to the identity matrix you can directly draw in these coordinates. This is for example done when applying full screen effects.


The next is the eye coordinates. This is the world as seen from the camera. As a result the origin is at the camera, and the same axis alignments as in the device coordinates apply.



To get from the eye coordinates to the device coordinates you build the projection matrix. The simplest is the orthographic projection, which just scales the values appropriately. The perspective projection is more complicated and involves simulating perspective.


Finally you have the world coordinate system. This is the coordinate system in which your world is defined, and your camera is part of this world. Here it is important to note that the axis orientations are just as you define them. If you prefer z+ as up, that is totally fine.


To get from world coordinates to eye coordinates you define the view matrix. This can be done with something like lookAt. What this matrix does is "move" the world so that the camera is at the origin and looking down the z- axis.


Computing the view matrix is surprisingly simple: you need to undo the camera's transformation. You basically need to formulate the following matrix:


$$ M = \begin{matrix} x[1] & y[1] & z[1] & -p[1] \\ x[2] & y[2] & z[2] & -p[2] \\ x[3] & y[3] & z[3] & -p[3] \\ 0 & 0 & 0 & 1 \end{matrix} $$


The x, y and z vectors can be taken directly from the camera. In the case of lookAt you would derive them from the target, eye (center) and up values, like so:


$$ z = normalize(eye - target) \\ x = normalize(up \times z) \\ y = z \times x $$


But if you happen to have these values just lying around you can just take them as they are.


Getting p is a bit more tricky. It is not the position in world coordinates but the position in camera coordinates. A simple workaround here is to initialize two matrices, one with only x, y and z and a second one with -eye, and multiply them together. The result is the view matrix.


For how this may look in code:



mat4 lookat(vec3 eye, vec3 target, vec3 up)
{
    vec3 zaxis = normalize(eye - target);
    vec3 xaxis = normalize(cross(up, zaxis));
    vec3 yaxis = cross(zaxis, xaxis);

    mat4 orientation(
        xaxis[0], yaxis[0], zaxis[0], 0,
        xaxis[1], yaxis[1], zaxis[1], 0,
        xaxis[2], yaxis[2], zaxis[2], 0,
        0,        0,        0,        1);

    mat4 translation(
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        -eye[0], -eye[1], -eye[2], 1);

    return orientation * translation;
}


full code


And finally for the sake of completeness, you also have the object coordinate system. This is the coordinate system in which meshes are stored. With the help of the model matrix the mesh coordinates are converted into the world coordinate system. In practice the model and view matrices are combined into the so called model-view matrix.


c++ - Fast Updating of Large Amounts of OpenGL data


I'm seeking advice, suggestions, and ideas on how to handle the updating of large amounts of data in OpenGL and c++.


My partner and I have gone through two methods.


The first is vertex by vertex rendering. Right away, this was crossed off the table; it's super slow! When using it, the simulation ran at 3 FPS. All it was rendering was just some untextured cubes composing a flat landscape. Imagine adding some animals to that, or even a full blown world! This method is definitely not an option.


The second method is VBOs. The problem with VBOs is that they don't like having vertices added or removed (though removing can be accomplished with the hack of setting data to null). We could just create a new VBO every time something changes in this way, but the 3D data is likely to change often, if not every frame. Thus that would also be very inefficient, and VBOs are not an option either, since we can't add vertices/faces and can only dirty-hack-remove existing vertices/faces.


What, then, would be the best method for us to use? Would we be able to use a shader or something to create a custom datatype on the graphics card, upload the 3D data to it initially, and then afterwards only send the transformation details (such as a transformation matrix or what have you) and which faces/vertices have been removed/added? Even if the above method is a possibility, how else could I go about doing this, and what are its pros and cons? Perhaps one of those would better serve my needs.


Edit: Here is what I mean by "large amounts of data". It shall be known that the project that requires this fast updating of "large amounts" of OpenGL data is a simulator that attempts to simulate its own world. World being defined as an entire existence, which may include multiple universes, dimensions, galaxies, solar systems, planets, etc, such that it is not limited to a single "planet", which is a common definition of the term "world". Thus, this "large amount of data" is the portion of the world that is visible to the player.


As this is a world simulator, all the data of the world is procedurally generated. This means 3D models are NOT being loaded. The "3d models", if you wish to call them that, are generated from the 3D data of the objects in the world, such as a house, tree, car, flower, cat, or cup.


In short, the world simulation aims to simulate the world as detailed as possible (the limit being set by the resources of the computer running it). For the purpose of this question, let us assume that the simulation is being run on a computer that allows for every object to possess very high polygon counts which we will say, for the purpose of this question, exceeds 100k for a cat.


That in itself is not a problem; just initially place the data in a VBO. However, what happens as the cat grows? New vertices and faces would be added and removed. The case extends to actions the cat does, such as walking around the world (animations are not explicitly defined and used in the code; they are, rather, a byproduct of the simulation, in which the 3D data itself is updated, creating the effect of an animation). That is where the problem lies. Creating a new VBO for every frame of the 'animation' (keep in mind the above statement on animations), or for every time the cat grows, is not an option due to performance concerns. It'd more or less achieve the same results as vertex-by-vertex rendering, but with more overhead, and the only performance gains would be seen when no new vertices and faces are added or removed.



In conclusion, large amounts of data, such as 100k polygons per model, will be seen. The specific edits that will be done to this data results from the growing, or otherwise shrinking, of objects in the world and the simulation of these objects (See the note I made earlier on animations.)


Edit 2: To clarify the confusion, this is how the data is changed:

1. The position of each vertex is updated relative to the camera. E.g., a dog walking away from the camera.
2. Vertices are added to and removed from the model. Thus faces are also changed, removed, or created.

Changes #1 and #2 are expected to occur at least once a second for every object in the world, excluding terrain, rocks and other inanimate objects.
I do not see how patterns of changes can be predicted, as the data can change in unimaginable ways.
Examples of changes that will occur:

1. The simplest case: an apple falling from the sky. This is easily done by simply applying a transformation matrix. The ones that follow are more complex.
2. A sword lopping off the arm of a soldier. In this example, vertices are removed from one model and put into a new model. VBOs do not explicitly support this, but a dirty hack exists to work around it.
3. An engineer welding two pieces of metal together. In this example, vertices are removed from one model and added to the next. VBOs do not support this. In this specific case, the problem can be worked around by simply retaining the individual VBOs and only doing the merger of data in the world simulation code.
4. A chemical reaction. Examples of this would be an acid eating away a material, or mixing liquid detergent and hydrogen peroxide (reference video).

5. A physical reaction, such as ice melting into water.
6. Liquids. The most complex case, as their surfaces dynamically change in countless ways.


Edit 3:
A wave of inspiration swept over me and I've come up with two ideas on how to solve this problem. I'm now working with my partner towards implementing both. In short, one method creates a custom data structure similar to VBOs, that has the in-built feature of adding and removing vertices, while the other is a hack using VBOs, destroying the old one whenever vertices are added or removed and recreating it, but doing it on the GPU so that data is not transferred all the time, which is the source of the nightmarish slow performance of vertex by vertex rendering. More details to come.




Tuesday, March 29, 2016

prepositions - at vs in (the hospital) - What is different?



I saw your mom in the hospital.
I saw your mom at the hospital.



What is different in these two sentences? Do two prepositions make significant difference?




meaning in context - What does (R-TN) after a name mean?


An example:



US Rep. Marsha Blackburn (R-TN) wants to make sure the Federal Communications Commission never interferes with "states' rights" to protect private Internet service providers from having to compete against municipal broadband networks.



From: http://arstechnica.com/business/2014/07/congresswoman-defends-states-rights-to-protect-isps-from-muni-competition


Looks like some kind of an abbreviation to me.



Answer




It means that the person is a Republican member of Congress from the state of Tennessee.


The "(X-YY)" convention is widely used in the news media to refer to current and former members of the U.S. House of Representatives and Senate, with X denoting the person's political party (usually R for Republican or D for Democratic) and YY denoting the state he or she represents. (See this page for the official list of two-letter state abbreviations.)


Subject of imperatives starting with 'let'?



[Source 1:] When a pronoun follows "let" in a mild exhortation, we use the object form of the pronoun.


We say "Let us go then," but we're apt to slip in the subject form, especially when the pronouns are compounded:

✘ "And now, let you and I take the first step toward reconciliation." ✘
It should read "let you and me … "
And in the Biblical admonition, we read ✘ "Let he who is without guilt cast the first stone." ✘
It should read "Let him who is without guilt cast the first stone."


[Source 2:] First person plural imperatives (cohortatives) are used mainly for suggesting an action to be performed together by the speaker and the addressee (and possibly other people): "Let's go to Barbados this year"; "Let us pray". Third person imperatives (jussives) are used to suggest or order that a third party or parties be permitted or made to do something: "Let them eat cake"; "Let him be executed".



1. What are the reasons behind the first sentence of this post (that I italicised)?


2. 1 implies that the object pronouns are all direct objects in these imperative clauses, so what or who is the subject in these imperative clauses?



Answer



The primary meaning



These two sentences illustrate the primary meaning of "let", of which all others are variations and extensions. You should start by memorizing these sentences:



Let down your hair! (As in the fairy tale of Rapunzel.)


Let me go!



"Let" primarily means to refrain from preventing. The first sentence tells "you" to remove whatever is binding her hair so that it falls or sways freely. The second sentence tells "you" to let the speaker ("me") out of their physical grasp, out of prison, out of slavery—to release the speaker from whatever restraint "you" controls. Similarly, "Let him out!" means "Stop preventing him from exiting." An electrical "outlet" is an opening in a wall where the electricity is allowed to come out; elsewhere, the wall and insulation block the flow of electricity.


These two sentences are addressed to "you", which is omitted, as it is in most commands. People sometimes say "You let me go!" The person addressed as "you" has power or authority to prevent what the speaker is commanding. So, the basic construction is clearly in the second person, with "you" as subject.


The object pronoun and infinitive verb are an old construction that is mostly lost in English, but "make" and "help" follow the same pattern. For example, "Make him walk," "Help him walk," "Let him walk." Verbs that take a direct object that is the subject for an infinitive verb are very common in English, but most of them use "to": "Tell him to walk", "Teach him to walk", "Prepare him to walk".


A secondary meaning


Here's the next most fundamental pattern (and you should memorize this exact sentence):




Let's go!



This is an exhortation for "you" to get started doing some activity together with the speaker. The specific activity is indicated by context. The most fundamental activity for this sentence is literally going: exiting the current location together, or starting to run or ride together. You might say it while sitting on a rollercoaster while waiting for it to get started.


When I was a little kid, I had to ask why "let's" is spelled with an apostrophe. I had no idea that it was short for "let us" until it was explained to me. I didn't believe it at first. But that explains what's happening here. The "Let me go" pattern is applied to a situation where the other person isn't exactly preventing you from doing something. But, you want to do an activity together, so you need their cooperation. And so, English has extended the sentence pattern for demanding that "you" release the speaker into a statement that sort of means "Hey you, release the blockage that is preventing us from going!" No one literally thinks of it this way, because "Let's go!" has become a stock phrase of its own, but it still strongly echoes the primary meaning of "release" as well as the structure of "Let me go!". Out of any specific context, "Let's go!" has a strong connotation of feeling free and unconstrained. People often say "Let's go!" when they're in a situation where flow or freedom of action is blocked, such as clogged traffic.


One could argue that "Let's go" is a first-person plural imperative, in which the subject is "we", since that's obviously the meaning: the speaker wants to go together with "you" (who could be more than one person). I don't recommend arguing or thinking that way, though, because then you won't follow the echoes and connotations that guide all the other common ways of stretching "let". As you will see below, the "negated negation" of "let" gives it both flexibility and conciseness for making a wide variety of exhortations, some of them fairly complex and subtle. If you replace that with grammar jargon about first-person and third-person even though the statements are worded as second-person commands, you won't see why "let" gets extended to cover all these meanings.


Stretching it further



Let them eat cake!


Let Kathy solve her own problems.




These two sentences announce that the speaker is refusing to act or take responsibility in the situation. They're still in the second person, but "you" is now something of a fiction. These sentences read as if the listener had the power to prevent the poor from eating cake, or to prevent Kathy from solving her own problems, and the speaker is exhorting "you" to refrain from exercising this power.


Because the primary and secondary patterns provide such strong, terse ways of expressing a speaker's wish by commanding that someone else stop preventing it from being realized, and because English lacks any other good way to express this, we use the same sentence pattern to announce the speaker's unwillingness to help realize other people's wishes, by pretending that the listener has the power to stop the other people from acting on their own! Thus, if the listener "lets" the other people act on their own, the speaker doesn't need to act on their behalf. This might seem a little silly, even a little stretched, but this is how fictions of language, politeness, law, etc. work.


Stretch without guilt



Let him who is without sin cast the first stone.



This one is addressed to a crowd who is about to stone a woman for committing a sin. "You" is everyone in the crowd. The word "him" addresses each individual within the crowd. The sentence actually uses "him" to mean "you" (singular), a member of "you" (plural, everyone in the crowd).


The primary sentence pattern is actually a wonderful fit to the situation. It "lets" the same strong terseness and clarity of expression apply to a situation that would be clumsy to describe precisely using more ordinary sentence forms: "To each person in the crowd: if you are without sin, cast the first stone; if you have sinned, then wait for that first person to cast a stone before you cast a stone yourself." The original expression has everything you need: a "you" that addresses the crowd, a person mentioned in the objective case who is to be allowed to do something, and a verb in the infinitive for the action to be allowed. The sentence exhorts the crowd to refrain from preventing the person without sin from casting the first stone—by waiting for that person to go first. It thereby exhorts each individual to restrain himself from casting any stones at all, since whoever went first would be declaring himself sinless. English grammar may be chaotic, but sometimes it's brilliant.


O let there be stretching




Let there be peace on Earth.



One thinks, at least as a polite fiction, that peace on Earth would surely happen but for various evils and failings that prevent it, since everyone would like peace on Earth. The sentence follows the basic pattern of pretending that "you" have the power to allow the desired action to happen (or, in this case, the desired situation to exist). But who is "you" here? It's not the person you're talking to, because the sentence doesn't exhort the listener to do anything. It just expresses a sentiment, though presumably one shared by everyone. One could argue that "you" is all of humanity, but I don't see it that way. Maybe it's addressed to God. You can address God in any situation, and maybe God has the power to make peace on Earth. It would be natural to say "O let there be peace on Earth", using "O" in its traditional sense of addressing a high-ranking noble or a deity.


But wait a minute. Here's one of the most well-known "let" sentences of all, from the book of Genesis:



Let there be light!



God said this himself! This is only the third line in the creation myth, and nobody but God exists yet, so God can't be addressing anyone but himself. So what is God doing demanding that he stop preventing himself from allowing light to exist?


Well, maybe that's exactly what God is doing. Interestingly enough, the story is that God creates light and all the rest by saying it into existence.



In Latin, the sentence is Fiat lux. (The Latin sentence is well-known to English speakers.) Unlike English, the Latin grammar is completely straightforward: fiat is the third-person passive subjunctive of facere, "to make". The subjunctive in Latin expresses an imagined situation, in this case a situation desired by the speaker. It's very hard to translate this into English without using "let", but here goes: And God said "That light be made", and light was made: Dixitque Deus fiat lux et facta est lux.


Sure, Latin, that's utterly clear and unambiguous. But where's the saying into existence? (Yes, I am addressing Latin in the second person.) English won't permit us to state a desired, imagined action without invoking a fictitious "you" with the power to grant--I mean, prevent—it, but that actually fits the myth better than straightforward grammar could. English grammar forces you to imply that God is talking to himself, and that this self-exhortation releases his power to create—which is exactly how Genesis explains the origin of the universe!


Well, I hope you enjoyed that. Of course, really, in phrases like "Let there be light!", "Let freedom ring!", "Let all citizens have the right to vote", etc., the primary meaning of "neglect to prevent" is only a hazy echo, barely registering for native speakers, and the feeling of addressing someone in the second person is quite weak, though continually reinforced by the grammatical form. One could also categorize "let" in these phrases as a sort of particle that conjugates like a verb—or one could invent still more grammatical theories.


Let this be a lesson to you


I think there is a broader lesson to be learned from all this, which goes beyond "let". The notion of a primary meaning or a primary usage, which gets stretched to cover situations that it doesn't fit perfectly, is a key concept of English grammar. It gives you a way to understand how English grammar is consistent in its own, quirky way, without trying to see it exclusively in terms of rules. English grammar works partly by rules, partly by compromises and making-do with not enough grammatical words and affixes. (A strange problem for a language with such a gigantic vocabulary.) Stock phrases acquire conventional meanings that stretch or trump the literal or grammatical interpretation of the individual words, and then the stock phrases get stretched, too, and on it goes. The extended senses maintain the original grammar as a sort of public fiction.


When you can understand a stretch or a compromise as just that, then you can stop searching for rules that aren't there. And when it comes to puzzling over super-precise grammatical categories when none of them quite fits, you can merrily just let it go.


unity - How can I launch a GameObject at a target if I am given everything except for its launch angle?



I am trying to launch an object at a target, given its position, its target position, the launch speed, and the gravity. I am following this formula from Wikipedia:


$$ \theta = arctan \bigg( \frac{v^2 \pm \sqrt{v^4-g(gx^2 + 2yv^2)}}{gx} \bigg) $$


I have simplified the code to the best of my ability, but I still cannot consistently hit the target. I am only considering the taller trajectory, of the two available from the +- choice in the formula.


Does anyone know what I am doing wrong?


using UnityEngine;

public class Launcher : MonoBehaviour
{
    public float speed = 10.0f;

    void Start()
    {
        Launch(GameObject.Find("Target").transform);
    }

    public void Launch(Transform target)
    {
        float angle = GetAngle(transform.position, target.position, speed, -Physics2D.gravity.y);
        var forceToAdd = new Vector2(Mathf.Cos(angle), Mathf.Sin(angle)) * speed;
        GetComponent<Rigidbody2D>().AddForce(forceToAdd, ForceMode2D.Impulse);
    }

    private float GetAngle(Vector2 origin, Vector2 destination, float speed, float gravity)
    {
        float angle = 0.0f;

        // Labeling variables to match the formula
        float x = Mathf.Abs(destination.x - origin.x);
        float y = Mathf.Abs(destination.y - origin.y);
        float v = speed;
        float g = gravity;

        // Formula seen above
        float valueToBeSquareRooted = Mathf.Pow(v, 4) - g * (g * Mathf.Pow(x, 2) + 2 * y * Mathf.Pow(v, 2));
        if (valueToBeSquareRooted >= 0)
        {
            angle = Mathf.Atan((Mathf.Pow(v, 2) + Mathf.Sqrt(valueToBeSquareRooted)) / g * x);
        }
        else
        {
            // Destination is out of range
        }

        return angle;
    }
}

Answer



I'm a bit skeptical of using atan here, because the tangent ratio shoots off to infinity at certain angles, and may lead to numerical errors (even outside of the undefined/divide by zero case for shooting straight up/down).


Using the formulae worked out in this answer, we can parametrize this in terms of the (initially unknown) time to impact, T, using the initial speed of the projectile:


// assuming x, y are the horizontal & vertical offsets from source to target,

// and g is the (positive) gravitational acceleration downwards
// and speed is the (maximum) launch speed of the projectile...

b = speed*speed - y * g
discriminant = b*b - g*g * (x*x + y*y)

if(discriminant < 0)
return CANNOT_REACH_TARGET; // Out of range, need higher shot velocity.

discRoot = sqrt(discriminant);


// Impact time for the most direct shot that hits.
T_min = sqrt((b - discRoot) * 2 / (g * g));

// Impact time for the highest shot that hits.
T_max = sqrt((b + discRoot) * 2 / (g * g));

You can choose either T_min or T_max (or something in-between if you want to fire with speeds up to but not necessarily equal to some maximum)
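

For reference, here is a sketch of where b and the discriminant come from, using the same x, y, g and launch speed (written v) as above. Fixing the impact time T fixes the launch velocity components, and requiring that their magnitude equals v gives a quadratic in T²:


$$ v_x = \frac{x}{T}, \qquad v_y = \frac{y}{T} + \frac{gT}{2} $$

$$ v_x^2 + v_y^2 = v^2 \;\Rightarrow\; \frac{g^2}{4}T^4 - (v^2 - yg)T^2 + (x^2 + y^2) = 0 \;\Rightarrow\; T^2 = \frac{2\left(b \pm \sqrt{b^2 - g^2(x^2 + y^2)}\right)}{g^2}, \quad b = v^2 - yg $$


The minus sign gives T_min (the flatter, more direct arc) and the plus sign gives T_max (the higher, slower arc), matching the code above.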


Example trajectories


(T_min is the shallow red trajectory at the bottom, and T_max is the tall green one. Any trajectory between them is viable at some feasible speed. When the two merge into the yellow trajectory, the object is out of range.)



Now that we've calculated a value for T, the rest is straightforward:


vx = x/T;
vy = y/T + T*g/2;

velocity = (vx, vy);

You can use this velocity directly (it has a length equal to speed by construction), or if you really need to know the angle, you can use atan2(vy, vx)




Edit: to make this applicable to more cases, here's a 3D version:


Vector3 toTarget = target.position - transform.position;


// Set up the terms we need to solve the quadratic equations.
float gSquared = Physics.gravity.sqrMagnitude;
float b = speed * speed + Vector3.Dot(toTarget, Physics.gravity);
float discriminant = b * b - gSquared * toTarget.sqrMagnitude;

// Check whether the target is reachable at max speed or less.
if(discriminant < 0) {
// Target is too far away to hit at this speed.
// Abort, or fire at max speed in its general direction?

}

float discRoot = Mathf.Sqrt(discriminant);

// Highest shot with the given max speed:
float T_max = Mathf.Sqrt((b + discRoot) * 2f / gSquared);

// Most direct shot with the given max speed:
float T_min = Mathf.Sqrt((b - discRoot) * 2f / gSquared);


// Lowest-speed arc available:
float T_lowEnergy = Mathf.Sqrt(Mathf.Sqrt(toTarget.sqrMagnitude * 4f/gSquared));

float T = // choose T_max, T_min, or some T in-between like T_lowEnergy

// Convert from time-to-hit to a launch velocity:
Vector3 velocity = toTarget / T - Physics.gravity * T / 2f;

// Apply the calculated velocity (do not use force, acceleration, or impulse modes)
projectileBody.AddForce(velocity, ForceMode.VelocityChange);

Procedural world generation oriented on gameplay features


In large procedural landscape games, the land seems dull, but that's probably because the real world is largely dull, with only limited places where the scenery is dramatic or tactical.



Looking at world generation from this point of view, a landscape generator for a game (that is, not for the sake of scenery, but for the sake of gameplay) needs to not follow the rules of landscaping, but instead some rules married to the expectations of the gamer. For example, there could be a choke point / route generator that creates hills, ravines, rivers and mountains between cities, rather than the natural way cities arise, scattered on the land based on resources or conditions generated by the mountains and rainfall patterns.


Is there any existing work being done like this? Start with cities or population centres and then add in terrain afterwards?


The reason I'm asking is that I'd previously pondered taking existing maps from fantasy fiction (my own and others), putting the information into the system as a base point, and then generating a good world to play in from it. This seems covered by existing technology, that is, where the designer puts in all the necessary information such as the city populations, resources, biomes, road networks and rivers, then allows the PCG fill in the gaps.


But now I'm wondering if it may be possible to have a content generator generate also the overall design. Generate the cities and population centres, balancing them so that there is a natural seeming need of commerce, then generate the positions and connectivity, then from the type of city produce the list of necessary resources that must be nearby, and only then, maybe given some rules on how to make the journey between cities both believable and interesting, generate the final content including the roads, the choke points, the bridges and tunnels, ferries and the terrain including the biomes and coastline necessary.


If this has been done before, I'd like to know, and would like to know what went wrong, and what went right.



Answer



Here's a great example of procedural terrain generation, using parameters like moisture, height etc... http://www-cs-students.stanford.edu/~amitp/game-programming/polygon-map-generation/


conjunctions - "I have no question"--> "Me, too" or "Me, either"?



Ok, at the end of an English class, the teacher says "Do you have any questions?"



Student A: I have no question


Student B: Me, too / Me, either



So, should Student B say "Me, too" or "Me, either"?



I know that "Me, either" is American English: Dictionary Link.


EDIT (after comments): So, we can't use "Me, too" in this case?


Note: my question is unique. I know the rule of using "Me, too" or "Me, either". "Me, too" for positive sentence & "Me, either" for negative sentence.



Ex: I have a question. Me, too


Ex2: I don't have a question. Me, either.



So if we say "I have no question", which one should we use: "Me, too" or "Me, either"?



Answer



This is very tricky, and I think this question deserves its own answer.



The best way for Student B to chime in really depends on how Student A answers the initial question:





  • Student A: I have no question.
    Student B: Me, neither.




  • Student A: I don't have any questions.
    Student B: Me, either.





  • Student A: No questions from me!
    Student B: Me, either.




  • Student A: I have some questions.
    Student B: Me, too.






I'm having a hard time trying to figure out when it's better to use Me, either or Me, neither.


Monday, March 28, 2016

c++ - 2D soft-body physics engines?


Hi, so I've recently learned the SFML graphics library and would like to use or make a non-rigid-body 2D physics system to use with it. I have three questions:


The definition of rigid body in Box2d is



A chunk of matter that is so strong that the distance between any two bits of matter on the chunk is completely constant.



And this is exactly what I don't want, as I would like to make elastic, deformable, breakable, and re-connectable bodies. 1. Are there any simple 2D physics engines out there with these kinds of characteristics? Preferably free or open source?


2. If not, could I use Box2D and work off of it to create one, even if it's based on rigid bodies?


3. Finally, if there is a simple physics engine like this, should I go through with the process of creating a new one anyway, simply for experience and to enhance my physics math knowledge? I feel like it would help if I ever wanted to modify the code of an existing engine, or create a game with really unique physics.



Thanks!




tools - What do you look for in a scripting language?




I'm writing a little embedded language for another project. While game development was not its original intent, it's starting to look like a good fit, and I figure I'll develop it in that vein at some point.


Without revealing any details (to avoid bias), I'm curious to know:


What features do you love in a scripting language for game development?


If you've used Lua, Python, or another embedded language such as Tcl or Guile as your primary scripting language in a game project, what aspects did you find the most useful?




  • Language features (lambdas, classes, parallelism)





  • Implementation features (performance optimisations, JIT, hardware acceleration)




  • Integration features (C, C++, or .NET bindings)




  • Or something entirely different?





Answer




I'm looking for two things- speed, and integration. Usually the two go together, and with familiarity. Unfortunately, for C++, there are pretty much no languages that offer speed and integration. I've used Lua and it sucked, horrifically. I spent the whole time writing bindings and nowhere near enough time actually writing code.


Language features? The point of embedding a scripting language is not so that it can have whizzy dynamic language features that my original language didn't have; it's so that it can be interpreted at run-time. I really don't care beyond that, as long as it's basically functional and fits with my host language (in this case C++), then that's fine. However, amazingly, languages that are designed to be integrated into host applications utterly fail the part about integration.


Do I need co-routines? No, I do not need co-routines. Do I need dynamic typing? No, I need to know what types are coming back at me from my scripting language, and since all of my existing code is built around very strong typing, I'd really like my script code to be able to respect that too. Do I need garbage collection? No, my types already manage their own resources, and I definitely do want deterministic destruction. Do I want goto? No- I want to throw exceptions.


The trouble I found was that basically all the existing scripting languages were designed to extend C, not C++, and don't properly support the C++ model in many ways, and in addition to this, they have totally different semantics. How on earth am I going to translate shared_ptr, which is automatic deterministic destruction, into a garbage-collected environment? You can write whatever wrapping libraries you want, you won't change the underlying language semantics being incompatible with the language you're trying to extend with it. How can I ensure that this void* is the right type? How can I deal with inheritance? How do I throw and catch exceptions? It just doesn't work.


A good scripting language for C++ would be statically typed, value semantics, deterministically destructed, throw and catch exceptions and respect my destructors/constructors/copy constructors, because then all my types will just work, nice and easy, and the resulting language will be fast and support all my original semantics, easy to bind to.


Sunday, March 27, 2016

word usage - What is the meaning of "I am so fly"?


I often hear teenagers saying things like "I am so fly".


This term is confusing to me.


Does it mean funny? Or dumb? I don't know.



Answer



When you come across slang terms whose definitions are hard to find, the Urban Dictionary can be a friend. (Just beware the amount of obscenity in it.)


fly




cool, in style


He was drivin some fly ass car



I'm so fly



The rapper way of saying that you are way cool


I'm so fly chicks wanna bang me when I drive by



Since definitions in the Urban Dictionary can be written by anyone (and sometimes I think 14-year-old boys write a lot of them), it's safer and more reliable to stick to professional dictionaries for anything other than hard-to-find definitions of slang. I mean, use TUD as a last resort, probably.


opengl - glsl directional light specular seems to be relative to origin not camera


I've been using the LearnOpenGL tutorials to calculate directional light. The diffuse works perfectly fine, but the specular only seems to work properly when the camera is near the origin, and the specular is applied the same to all objects.


(Screenshot: specular applied to all objects)


When I look at the second-to-last object I expect the specular to be the same as on the object around the origin, but instead it seems to react to the origin and not the camera.


(Screenshot: specular not working as intended)


My guess is it's something to do with converting something to model space instead of camera space, but as I'm new to GLSL I'm lost as to where to look first.


Vertex shader:



#version 330 core

layout (location = 0) in vec3 vertexPosition;
layout (location = 1) in vec2 vertexUV;
layout (location = 2) in vec3 vertexNormal;

out vec3 VectorWorldPosition;
out vec2 UV;
out vec3 Normal_cameraspace;

uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;

void main()
{
    gl_Position = MVP * vec4(vertexPosition, 1.0f);

    VectorWorldPosition = (M * vec4(vertexPosition, 1.0f)).xyz;

    Normal_cameraspace = vertexNormal; //(M*V* vec4(vertexNormal, 1.0f)).xyz;

    UV = vertexUV;
}

Fragment Shader:


#version 330 core

in vec3 vertexWorldPosition;
in vec2 UV;
in vec3 Normal_cameraspace;

out vec4 color;

struct DirectionalLight{
    vec3 position;
    vec3 ambient;
    vec3 diffuse;
    vec3 specular;
};

struct Material{
    sampler2D diffuse;
    sampler2D specular;
    float shininess;
};

uniform DirectionalLight dirLight;
uniform Material material;

uniform vec3 viewPos;

vec3 CalcDirectionalLight(DirectionalLight light, vec3 normal, vec3 viewDir);

void main()
{
    vec3 output = vec3(0.0f);

    vec3 norm = normalize(Normal_cameraspace);

    vec3 viewDir = normalize(viewPos - vertexWorldPosition);

    output += CalcDirectionalLight(dirLight, norm, viewDir);

    color = vec4(output, 1.0f);
}

vec3 CalcDirectionalLight(DirectionalLight light, vec3 normal, vec3 viewDir)
{
    vec3 lightDirection = normalize(light.position);

    float diff = clamp(dot(normal, lightDirection), 0.0, 1.0);

    vec3 reflectDir = reflect(-lightDirection, normal);

    float spec = pow(max(dot(viewDir, reflectDir), 0.0), material.shininess);

    vec3 ambient = light.ambient * vec3(texture(material.diffuse, UV));
    vec3 diffuse = light.diffuse * diff * vec3(texture(material.diffuse, UV));
    vec3 specular = light.specular * spec * vec3(texture(material.specular, UV));

    return (ambient + diffuse + specular);
}

I'm using the light position here where the tutorial asks for a direction. My thinking is that, relative to the origin, the position will be equal to the direction. This works fine for diffuse, so it should work for specular too.



Answer



You need to make sure that all your lighting calculations are done in the same space. Assuming your VertexNormal input is in model space, you can't just output it and call it 'Normal_CameraSpace'.


Decide where all your maths will be done. If it's in world space, then your normals need to be transformed by the inverse transpose of the world matrix, or by the inverse transpose of the world-view matrix if you are working in camera space.



Your light vector needs to be transformed to the same space too.


And if you are calculating a vector to the viewer, then you need the world space location of your camera.
If you are in view/camera space, then your camera is always at the origin.


And of course, you need to transform the vertex position to the same space as well. I find it easier to think in 'WorldSpace'. You might want to start there...


Vertex Shader:
// World space position
vec4 world_pos = M * vec4(in_position, 1.0);

// perspective transformed position
gl_Position = MVP * vec4(in_position, 1.0);


// Matrix for transforming normals
mat4 invtransmodel = transpose(inverse(M));

world_normal = normalize((invtransmodel * vec4(in_normal, 0.0)).xyz);
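
To round out the sketch, the matching fragment-shader side might look roughly like this, with everything kept in world space. This is a hedged illustration rather than the exact fix for the code above: world_pos, world_normal, viewPos_world and lightDir_world are assumed names, with the first two passed from the vertex shader as out vec3 variables and lightDir_world holding the normalized direction from the surface towards the light.


#version 330 core

in vec3 world_pos;       // world-space position from the vertex shader
in vec3 world_normal;    // world-space normal from the vertex shader
in vec2 UV;

out vec4 color;

uniform sampler2D diffuseMap;   // assumed texture uniform
uniform vec3 viewPos_world;     // world-space camera position
uniform vec3 lightDir_world;    // normalized direction from surface towards the light
uniform float shininess;

void main()
{
    vec3 N = normalize(world_normal);
    vec3 L = normalize(lightDir_world);
    vec3 V = normalize(viewPos_world - world_pos);  // surface -> camera, both in world space

    float diff = max(dot(N, L), 0.0);
    vec3 R = reflect(-L, N);                        // reflect the incoming light about the normal
    float spec = pow(max(dot(V, R), 0.0), shininess);

    vec3 base = texture(diffuseMap, UV).rgb;
    color = vec4(base * diff + vec3(spec), 1.0);
}


The key point is that N, L, V and world_pos are all expressed in the same space; mixing an untransformed model-space normal with a world-space view vector is exactly what makes the highlight track the origin instead of the camera.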

legal - Are there legality issues using a company's name in a game?


I'm working on an RPG of sorts where the character has a "day job" that determines how much money they make each time the day cycles and other things like that.


Are there any legality issues with using the names of companies like "Barnes & Noble" or even "Apple"?



Answer



I am not a lawyer.


But almost certainly the answer is "yes."


"Apple" is a registered US trademark of Apple, Inc. for example (and presumably so is Barnes & Noble and the names of almost any other major company you can think of), and many companies may have trade or service marks registered in other countries as well. Usage of those marks is subject the constraints of the relevant trademark law. In some cases, depending on what you want to use, there may be copyright issues involved as well.



See here and here for more information on US patent, trademark and copyright law.


The definite article before nouns derived from verbs



Tell me please which of the following sentences is correct.



1 "His actions have raised concerns about monopolization of the industry."
2 "His actions have raised concerns about the monopolization of the industry."



I think the second is correct because "monopolization" is modified by "industry", but I have noticed that people sometimes drop "the" in such sentences, which is why I am confused about the usage of "the" before nouns derived from verbs followed by an "of" phrase.



Answer



They are both correctly-formed sentences, but given the context, the first is probably correct.


To explain, the first implies his actions have caused concerns to be raised that monopolization of the industry in question is happening. The second implies that his actions have caused concerns to only now be raised about a previously known-about monopolization of the industry in question. Whilst the second scenario is not impossible, the first seems more likely. I should add that these implications are not 100% the only way of reading the two sentences, but seen side by side, that is what the difference would seem to imply.


The definite article in front of "monopolization" changes how that word is being used, so it is not (directly, at least) connected to the definite article in front of "industry", which changes how the word "industry" is being used.



architecture - How to nest one Unity project into another?


I'm creating an AI course in Unity. With regard to my question, there are two important properties of the course:



  1. Each tutorial is a separate Unity project that can be loaded up, allowing the student to start anywhere in the course and have a project that mirrors the project covered in the video they're watching.

  2. Each tutorial builds off of previous tutorials. For example, tutorial one is just the base world provided with the course. Tutorial 5 includes all the code and assets introduced in tutorials 1-4, from the base world up.


My question is, how do I manage the nesting of Unity projects, without a lot of manual updating and copying?


Say, for example, that while I'm creating the content for tutorial 5, I find a bug in the code created in tutorial 2. I don't want to have to update tutorials 2, 3, 4 and 5. I want to fix the bug in tutorial 2 and have it propagate through all the future tutorials that use that code or asset.


Essentially, each tutorial is a milestone on the way to the completed project. The completed project is the sum of all the tutorials and will contain all of the code, assets, prefabs, etc.


You can think of this like a dependency tree. Tutorial 5 depends on the code and assets of tutorial 4, which depends on the code and assets of tutorial 3 and so on. If this were a Visual Studio project, I'd create a new project and configure the dependencies under one solution.



enter image description here


In this example, the lines represent files that are identical to each other. Changes to a file in its origin project should update that file in subsequent projects. Ideally, a change to the file in any project would update all the others, but that's not a required feature.


I am, of course, using source control as well. So there may be a trick or two I can utilize there, but I'm not sure.


Some things I don't think will work:



  1. Exporting code as a DLL, to be used in the next project. I don't think this will work because I don't want the student to have to do this with their own project. That means a student going through the course one tutorial at a time would have a different code structure than the student who started part way through.

  2. Creating Unity packages. This would maintain the code, assets and prefabs, but (as far as I can tell) would require me to re-export and re-import packages for each project subsequent to the changed project. But I believe this has the most potential if there's a way to automate the process.


Nesting projects like this would also be useful in other situations, like a developer who's built up a core project they want to start all of their projects from. They could have 5 games that all share the same "base project", and each of the 5 games would benefit from bug fixes or feature additions that go into the base project, meaning each change only has to be made once.



Answer




The new Unity Package way


Unity has recently introduced a Package Manager. This is a system that most Unity components will be moving to, and it also allows you to create your own packages. Interestingly enough, these packages can be the entire contents of one Unity project, allowing you to nest one Unity project inside of another. There are two requirements:




  1. In the nested package, include a package manifest file in the directory you define as a package; this takes the form of a package.json file. Everything in the same directory as your package.json becomes part of the package (a rough sketch of both manifest files follows below). See "Package Manifests" under the advanced packaging section of this document.




  2. In the encapsulating/parent Unity project, add a reference to your nested package in the manifest.json file. The manifest.json defines all the packages for your Unity project. At the moment, it must be manually edited to include this reference, but future versions of Unity's Package Manager UI may allow you to do this in the Editor. See "Project manifests" under the advanced packaging section of this document.





When those conditions are met, you'll see the contents of the nested package included as a package in your parent project. This is superior to "The Unity way" below because it's bidirectional: changes made to your nested package's scripts/assets are made in the nested project, regardless of whether they're made from the nested project's Editor or the parent project's Editor.
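
As a rough illustration of those two files (the package name, paths and version numbers below are made up for the example, not taken from a real project), the nested project's package.json and the parent project's Packages/manifest.json might look like this:


package.json, in the directory of the nested project you are exposing as a package:

{
  "name": "com.example.baseworld",
  "version": "1.0.0",
  "displayName": "Base World",
  "unity": "2018.3"
}

manifest.json, in the parent project's Packages folder:

{
  "dependencies": {
    "com.example.baseworld": "file:../../BaseWorld/Packages/com.example.baseworld"
  }
}


The "file:" form tells the Package Manager to load the package from a local path (resolved relative to the Packages folder), which is what makes the nesting bidirectional: both Editors are reading the same files on disk.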


I have seen some issues with this strategy in regard to debugging. It's possible this is simply because the packaging system is still early in development. Sometimes when debugging scripts in the parent project, stepping into a class that's defined in the nested project will instead step into the script assembly Unity compiled for the package. This means only compiled metadata is available for the class instead of the actual source (even though it's right there).


The Unity way


Automate the export and import of package files. This can be done using the command line options for Unity and a batch script file, something like the following script (line breaks added for readability):


Unity -projectPath "C:\Code\AI\Base\BaseWorld"
-exportPackage "Assets\Scripts" "C:\Code\AI\BaseWorldPackage.unitypackage"
-quit
Unity -projectPath "C:\Code\AI\Ch01\Chapter1"
-importPackage "C:\Code\AI\BaseWorldPackage.unitypackage"
-exportPackage "Assets\Scripts" "C:\Code\AI\Chapter1.unitypackage"

-quit

This works great for selecting the assets you want to carry forward from previous projects. Each project adds its own assets plus the assets imported from the previous project, and produces its own package to be consumed.


The caveat here is that it's one-way. It's very easy to change a file in ProjectN+2, even though that file comes from ProjectN. That change will be overwritten the next time the script runs, because the change wasn't made in ProjectN, and all the updates only go in one direction.


The file system way


Hard links. Create hard links for each of the resource files you want to carry forward. This allows file content to be synchronized in two directions, because any edits to the file are essentially editing the same file. There are tools that can assist in the creation of these links to make the process easier.
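
On Windows, for instance, a single hard link can be created from the command line like this (the file name and paths are placeholders, just to show the shape of the command):

mklink /H "C:\Code\AI\Ch01\Chapter1\Assets\Scripts\Agent.cs" "C:\Code\AI\Base\BaseWorld\Assets\Scripts\Agent.cs"

Both paths then refer to the same file contents, so an edit made in either project shows up in both.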


The caveat here is that these structures are not easily managed by source control. Some source control systems don't work well with hard links or symbolic links. Additionally, they won't recreate the link structure when cloning a repository. Also, file renaming isn't well supported here.


The source control way


Utilize subrepositories or svn:externals. These are essentially one repository nested inside another. This can get tricky, since these structures commonly operate at the directory level, which means you can't add other files to the directories being nested. It makes for a somewhat splintered project. Additionally, to get the latest updates, source needs to be checked in and then checked out again in the nested repos.
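
With Git, for example, the shared base could be pulled into each tutorial repository as a submodule (the URL and path here are placeholders):

git submodule add https://example.com/BaseWorld.git Assets/BaseWorld

Each tutorial repository then pins a specific commit of the base project, and picking up fixes in later tutorials is an explicit submodule update rather than an automatic sync.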


The robocopy way



Create a script to copy the files. Robocopy has a number of options for this. The script below has the following properties:


  1. Copies all files and directories from BaseWorld to Chapter1

  2. Synchronizes the contents of files between BaseWorld and Chapter1 (depending on which is newer)

  3. Allows additional files to exist in Chapter1 without being copied back to BaseWorld


robocopy "C:\Code\AI\Base\BaseWorld\Assets\Resources"
"C:\Code\AI\Ch01\Chapter1\Assets\Resources"
/E /XO /XF *.meta
robocopy "C:\Code\AI\Base\BaseWorld\Assets\Scripts"
"C:\Code\AI\Ch01\Chapter1\Assets\Scripts"
/E /XO /XF *.meta

robocopy "C:\Code\AI\Ch01\Chapter1\Assets\Resources"
"C:\Code\AI\Base\BaseWorld\Assets\Resources"

/XO /E /XX /XL
robocopy "C:\Code\AI\Ch01\Chapter1\Assets\Scripts"
"C:\Code\AI\Base\BaseWorld\Assets\Scripts"
/XO /E /XX /XL

The caveat here is that it doesn't really support file renames. Renaming a file or directory will result in both the old copy of the file and the renamed file existing in subsequent directories.


Currently, there doesn't appear to be a single solution to make the process easy and transparent.




Final solution


I ended up using the robocopy way. It's two steps: the copy back and the copy forward.



Copy back. I start by copying any files that have been modified in project N to project N-1. Only files that already exist in project N-1 are copied back from project N. This means files and directories that only exist in later projects don't get copied back to previous projects. It also means I can make changes to files in whatever project I happen to be using, and they will be propagated all the way back to the project they originated in. Meta files are ignored in the copy back.


Copy forward. After copying back, I copy everything forward through the projects. This is a bit more liberal: if a file exists in project N-1, it will be copied forward to N. This ensures that any changes that were copied back from a project in the middle of the chain are now carried forward again to the latest project. Meta files are copied forward. Additionally, .asset files are copied forward too; this includes project settings, layers, sorting layers and tags.


This two-step approach ensures that any time a file changes, it's updated throughout the project chain. There were some growing pains while refining which file types to include and which to exclude. The project settings should be set to make meta files visible and stored as plain text.


I found this approach to be the most seamless. I can run this script even when I have Unity projects open (unlike using Unity packages). And it's much faster than exporting and importing packages using Unity. I don't have to check in code or take a lot of time to create new linked files. I've been pretty happy with it so far. I have about 10 projects linked in this way right now (with dozens more to come). The script takes about 8 seconds to run through.


Implementation details


I created a little C# project to handle the execution of the robocopy calls. I tried doing it with batch scripts, but that was more work than it was worth :). I pass in the root directory of my projects. They're organized like so:


Root
    Ch01
        01 Project
        02 Project
        03 Project
    Ch02
        01 Project
        02 Project
    Ch03
    Etc.

I parse this structure to create a list of projects to iterate through.


Copy back, for each project, starting at the tip and moving to the base:


for (int i = projects.Count - 1; i > 0; i--)
    robocopy projects[i] projects[i-1] /XO /E /XX /XL
        *.cs *.shader /XD Library /XF *.asset *.unity

Copy forward, for each project, starting at the base and moving to the tip:


for (int i = 0; i < projects.Count - 1; i++)
    robocopy projects[i] projects[i + 1] /E *.cs *.prefab *.controller *.anim
        *.png *.mat *.shader *.meta /XD Library
    robocopy projects[i]\ProjectSettings\ projects[i + 1]\ProjectSettings\ *.asset
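
For completeness, here is a minimal C# sketch of how such a runner could shell out to robocopy. It is an illustration of the approach described above, not the author's actual code: the directory layout is assumed to match the Root/ChXX/NN Project structure shown earlier, and the argument strings simply mirror the pseudocode.


using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.IO;
using System.Linq;

class ProjectSync
{
    static void Main(string[] args)
    {
        string root = args[0];  // e.g. C:\Code\AI

        // Build the ordered project list: Root\Ch01\01 Project, Root\Ch01\02 Project, ...
        List<string> projects = Directory.GetDirectories(root)
            .OrderBy(chapter => chapter)
            .SelectMany(chapter => Directory.GetDirectories(chapter).OrderBy(p => p))
            .ToList();

        // Copy back: tip -> base, only files that already exist in the earlier project.
        for (int i = projects.Count - 1; i > 0; i--)
            Run($"\"{projects[i]}\" \"{projects[i - 1]}\" *.cs *.shader " +
                "/XO /E /XX /XL /XD Library /XF *.asset *.unity");

        // Copy forward: base -> tip, carrying assets, meta files and project settings along.
        for (int i = 0; i < projects.Count - 1; i++)
        {
            Run($"\"{projects[i]}\" \"{projects[i + 1]}\" *.cs *.prefab *.controller *.anim " +
                "*.png *.mat *.shader *.meta /E /XD Library");
            Run($"\"{projects[i]}\\ProjectSettings\" \"{projects[i + 1]}\\ProjectSettings\" *.asset");
        }
    }

    // Launch robocopy with the given arguments and wait for it to finish.
    static void Run(string arguments)
    {
        var process = Process.Start(new ProcessStartInfo("robocopy", arguments)
        {
            UseShellExecute = false
        });
        process.WaitForExit();
    }
}


The important part is simply the ordering: every copy-back pass runs before any copy-forward pass, so a fix made in a later chapter flows back to its origin and then forward through the whole chain.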

opengl - Lighting in a Minecraftian World


Minecraft is a game that is largely based on a heightmap, and it uses that heightmap information to flood the world with light. From my understanding, the highest point in the heightmap marks the end of the sunlight-influenced area. Everything above that is lit by sunlight; everything below it is only influenced by nearby light within a radius of 8 blocks.


Thus, if you have a floating island at the top of your world, everything below it will essentially be treated as a cave. When two lights influence the same point, the brighter light wins (I'm unsure about that).



Either way, there are a couple of problems with Minecraft's lighting model: first of all, if your world does not have a heightmap, it becomes trickier to figure out what exactly is supposed to emit sunlight and what isn't. A simple way would be to assume that the world is (in my case) a floating rock, then traverse each axis from both directions and figure out where the rock starts and ends. But this does not fully eliminate the problem, as dents in the rock are not supposed to be in darkness.


Minecraft itself caches the light information in its chunks, together with the information about the material of each block. Thus the lighting only has to update when the world is modified. Unfortunately, that process is still pretty slow, and on quick light changes one can see the lighting lag behind. That's especially true if a lot of blocks change (TNT, sunset, etc.) and you're not running the fastest computer (or Java on a Mac).


From my still limited understanding of 3D graphics, lighting a world like Minecraft's shouldn't be the biggest issue. How would you tackle the problem?


I think the basic requirements for lighting in a voxel world would be



  1. Update fast enough that it could happen in a single frame. One might be able to do the lighting on the graphics device and download the changed light information to main RAM.

  2. Light information must be quickly available to the main game logic, so it can't live entirely on the graphics device. Reasoning: light affects the growth of grass, the spawning of monsters, etc.

  3. Light updates would have to be local to a chunk, or have some other limit, so that one does not have to relight the whole world, which might be very large.


The main idea would be to make the light updates fast, not necessarily more beautiful. To improve the rendered lighting in general, one could add SSAO on top of that, which should result in much nicer-looking worlds.





prepositional phrases - Use/nonuse of articles in 'a state of residual charge of battery of a microphone at a...'


I know you put 'a', 'an', or 'the' for countable nouns.



But I sometimes see some phrases without the articles when they are supposed to be included.


For example,



a state of residual charge of battery of a microphone at a ..



I think that "battery" is a countable noun, so it should be "a state of residual charge of a battery of a .."


But readability is much better without 'a' in front of "battery".


So my question is: do you omit articles when there are too many "of" phrases in one sentence?




[Unity] Render an animated texture to a screen


I'm looking for a way to write typed text onto a texture and then render it on a screen: in the game, it's just the screen of a computer that a scientist is typing text on.


The texture will already be generated: it will have several textures in it.


I can't find any obvious way to do a simple render-to-texture with Unity. How could I do that?




c++ - Fast, accurate 2d collision


I'm working on a 2D top-down shooter, and now need to go beyond my basic rectangle bounding-box collision system.


I have large levels with many different sprites, all of which are different shapes and sizes. The textures for the sprites are all square PNG files with transparent backgrounds, so I also need a way to register a collision only when the player walks into the coloured part of the texture, and not the transparent background.


I plan to handle collision as follows:




  1. Check if any sprites are in range of the player

  2. Do a rect bounding box collision test

  3. Do an accurate collision (Where I need help)


I don't mind advanced techniques, as I want to get this right with all my requirements in mind, but I'm not sure how to approach this, or what techniques or even libraries to try. I know that I will probably need to create and store some kind of shape that accurately represents each sprite minus the transparent background.


I've read that per-pixel collision is slow, so given my large levels and number of objects, I don't think that would be suitable. I've also looked at Box2D, but haven't been able to find much documentation, or any examples of how to get it up and running with SFML.



Answer




  1. Create a grid and update it for every object that moves.

  2. Only check for collisions between objects in the same squares.


  3. Check if the bounding box of the objects intersects (their containing rectangle).

  4. Check for pixel-perfect collision using a low-res version of the outline (see Game Physics).

  5. Do a normal check of the outline tracing as described in Game Physics (Q 2)


Step 1:


Create a 2D grid array. Every object knows which squares it occupies from its x,y position and its width and height. If an object moves, it clears itself from the old squares and registers itself in the new squares it occupies.


This only takes O(n) in total for n objects; for any specific object it's O(1).


Step 2:


Run all the checks for collisions between objects in the same squares. No need to run tests for collisions between objects in different squares. An object can occupy up to four squares if it is of average size. This means very few checks.
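
Here is a small sketch of steps 1-2, written in C# for brevity (the question is C++/SFML, but the structure maps across directly). The cell size, the Entity fields and the class names are assumptions for illustration only.


using System.Collections.Generic;

class Entity
{
    public int X, Y, Width, Height;   // axis-aligned bounding box in world pixels
}

class SpatialGrid
{
    const int CellSize = 64;          // assumed cell size in pixels
    readonly Dictionary<(int, int), List<Entity>> cells = new Dictionary<(int, int), List<Entity>>();

    // Step 1: register an object in every cell its bounding box overlaps.
    // When an object moves, remove it from its old cells and call this again.
    public void Insert(Entity e)
    {
        foreach (var cell in CellsFor(e))
        {
            if (!cells.TryGetValue(cell, out var list))
                cells[cell] = list = new List<Entity>();
            list.Add(e);
        }
    }

    // Step 2: collision candidates are only the objects sharing at least one cell with e.
    public IEnumerable<Entity> Candidates(Entity e)
    {
        var seen = new HashSet<Entity>();
        foreach (var cell in CellsFor(e))
            if (cells.TryGetValue(cell, out var list))
                foreach (var other in list)
                    if (other != e && seen.Add(other))
                        yield return other;
    }

    IEnumerable<(int, int)> CellsFor(Entity e)
    {
        for (int x = e.X / CellSize; x <= (e.X + e.Width) / CellSize; x++)
            for (int y = e.Y / CellSize; y <= (e.Y + e.Height) / CellSize; y++)
                yield return (x, y);
    }
}


Each candidate pair then goes through the bounding-box test (step 3), and only the survivors reach the pixel-accurate test (step 4).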


Step 3:



Check for intersection between the objects' rectangles. If no intersection exists, stop.


Step 4:


Check for pixel-perfect collisions between the outlines of the objects, but only inside the area of intersection. It should be fast enough. If not, create a low-res 2D boolean array and check it first; if you find collisions there, you only need to check a small segment of the high-res 2D array, saving you some precious time.
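
As a small illustration of this step (still in C#, and assuming each sprite carries a bool[,] opacity mask, which is an assumption rather than something from the question), the per-pixel test only has to scan the overlap of the two bounding boxes:


using System;

static class PixelCollision
{
    // posA/posB are the top-left world positions of the two masks.
    public static bool Overlaps(bool[,] maskA, (int x, int y) posA,
                                bool[,] maskB, (int x, int y) posB)
    {
        // Intersection of the two bounding boxes in world coordinates.
        int left   = Math.Max(posA.x, posB.x);
        int top    = Math.Max(posA.y, posB.y);
        int right  = Math.Min(posA.x + maskA.GetLength(0), posB.x + maskB.GetLength(0));
        int bottom = Math.Min(posA.y + maskA.GetLength(1), posB.y + maskB.GetLength(1));

        // Only the overlapping region is scanned; both masks must be opaque at the same point.
        for (int x = left; x < right; x++)
            for (int y = top; y < bottom; y++)
                if (maskA[x - posA.x, y - posA.y] && maskB[x - posB.x, y - posB.y])
                    return true;

        return false;
    }
}


The same scan can be run first against low-res (e.g. 1/16-scale) masks, as suggested below, to reject most pairs before touching the full-resolution data.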


Please read this for the concept of how to split your game world into a grid of squares:


Making an efficient collision detection system


Please read this for intuition on how to detect pixel perfect collisions.


Game physics / 2D Collision detection AS3


You can improve performance significantly:





  1. Saving a low-res (1/16) version of the outline to check against first.




  2. Only checking in the area where the two rects intersect.




  3. Dividing the outline roughly into segments, and only checking for collisions between segments first.




Please feel welcome to comment and I will elaborate.



check in the area of intersection


Saturday, March 26, 2016

modding - How do I add a custom mob to Minecraft?


I've basically decided to make my own mob. So far, I have:



  • Created my mob's entity class

  • Created my mob's model class

  • Drawn the model

  • Added the function call for addMapping within the EntityList class



I'm stuck on what to do next. I've tried finding the code that deals with passive animal spawning in the world; however, I can't seem to find it.


Help greatly appreciated.




Static Meshes vs 3D models


What is the difference between static meshes and regular 3D models? I am very new to game development, and I am using UDK (Unreal Development Kit).



Answer



According to UDN's page on static meshes:


"A Static Mesh is a piece of geometry that consists of a set of polygons which can be cached in video memory and rendered by the graphics card."


There really isn't much of a difference, other than that static meshes as UDK understands them are stored inside a package, are (often if not always) associated with a material and a collision mesh, and usually have LODs, which are themselves separate "regular 3D models" (though these can be automatically generated by Simplygon from inside the UDK static mesh editor).


I suspect you're asking "How can I import my 3D model into UDK and use it as a static mesh?", in which case there are a number of ways. The basic rundown is:



  1. Find a plugin for your 3D package that supports exporting ASE or FBX format.


  2. Import your ASE/FBX into a package as a static mesh.

  3. If necessary, associate a material and collision mesh with it.


Of course, there are finer details to consider regarding UV setup for Lightmass, smoothing groups, etc., but those will all depend on your 3D authoring tool.


If you're just getting started, I would recommend taking a look at 3dbuzz' "Creating a Simple Level" video series. It requires (free) registration, but they cover all the highlights of what goes into using your assets in the UDK.


algorithm - Efficient path-finding on 2D tile-based multilevel map


It's a question I've been thinking about for some time... How do you efficiently find a path on a 2D tile-based multilevel map? The map I use, for example, is 2048 by 2048 tiles. It has 14 levels, and the levels are connected by stairs, ladders, rope holes, ...


How would you introduce level-changing tiles into A* in an efficient way? I know it is possible to add multilevel path-finding by just adding edges from an up-node to the corresponding down-node. But then path-finding isn't very efficient.


For example: what if the current node (e.g. [100, 100, 7]) is directly under the goal (i.e. [100, 100, 8]), and we can't go up anywhere near the current node? Instead, we first have to go down some levels and then up again to reach the goal. A lot of non-existent paths will be considered (= a lot of time and computation) before we finally find an existing one.


Feedback appreciated, Gillis



Answer



That would depend on what you're using for your heuristic. However, even if you're using shortest distance, the search algorithm will still work. It will spread out evenly until it finds somewhere to go down.


If you're already doing that and just want to find a way to improve the speed, there are a few options you can use.





  • Add precursors to your search. If the result is on the next level down, add a requirement that the algorithm must first find a way down. The heuristic for this can be the average of the distances to all the ways down on the current level. Though you may find a way down that's not connected to the part of the lower level you want to reach.




  • Add "reachability" information to your grid. Do a breadth first search on each level after it's created. Mark each tile that's reachable with a "reachability" zone ID. Continue this until every tile is marked. For example, each tile with the ID "1" is reachable from every other tile with the ID "1" on the same level. Now when you're searching for a way down, you can rank the ladders based on which zone they connect to. If the tile you're trying to reach on the next level down is in zone "4" and you have a ladder on your current level that leads to zone "4" you can head for that one directly.




You can even extend the "reachability" idea across multiple levels. I would use a separate ID for that. This second ID would basically tell you whether one tile is completely disconnected from another, so you could very quickly decide to forgo the search entirely.
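
Here is a minimal sketch of that per-level zone-labelling pass, in C# for illustration; the bool[,] walkability map and the function name are assumptions, not the asker's data structures.


using System.Collections.Generic;

static class ZoneLabelling
{
    // Flood-fill every walkable tile on one level with a "reachability" zone ID.
    // Tiles that share an ID can reach each other without changing level.
    public static int[,] LabelZones(bool[,] walkable)
    {
        int w = walkable.GetLength(0), h = walkable.GetLength(1);
        var zone = new int[w, h];          // 0 = not walkable / not yet labelled
        int nextZone = 0;

        for (int sx = 0; sx < w; sx++)
            for (int sy = 0; sy < h; sy++)
            {
                if (!walkable[sx, sy] || zone[sx, sy] != 0) continue;

                nextZone++;
                var queue = new Queue<(int x, int y)>();
                queue.Enqueue((sx, sy));
                zone[sx, sy] = nextZone;

                while (queue.Count > 0)
                {
                    var (x, y) = queue.Dequeue();
                    foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
                    {
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (!walkable[nx, ny] || zone[nx, ny] != 0) continue;
                        zone[nx, ny] = nextZone;
                        queue.Enqueue((nx, ny));
                    }
                }
            }

        return zone;
    }
}


Stairs and ladders can then be annotated with the zone they lead to on the level below, so the search only considers exits that actually connect to the goal's zone.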


Basically, the best way to speed up the searches is to add more data to your world. With better data, your algorithm will be able to make better decisions about which paths to try and which to avoid, speeding up your search all in all.


Simple past, Present perfect Past perfect

Can you tell me which form of the following sentences is the correct one please? Imagine two friends discussing the gym... I was in a good s...