Thursday, May 31, 2018

usage - Comma before "because"


Searching Google for "One should keep his words both soft and tender", I noticed that in some cases because is preceded by a comma. Can the comma always be added before because, or only in specific cases?


I am used to sentences like the following, where I have never added a comma before because.



You speak like that because you are unsure of yourself.




Answer




In short, a comma is quite possible in other because-sentences, but it would most likely be wrong in your example. A context can be constructed in which the comma would be right in your sentence, but assuming a standard context it would be wrong. In similar but longer sentences, however, the comma can be acceptable.




You probably know how defining and non-defining relative clauses work. In general, you use a comma when it is non-defining, but no comma when it is a defining relative clause:



I dislike Frenchmen, who always stink of garlic.



You dislike all Frenchmen. The main clause could stand on its own: the number of things or people that it refers to (i.e. that the antecedent refers to) does not change at all if you add or remove the relative clause.



I dislike Frenchmen who always stink of garlic.




You dislike only those Frenchmen who always stink of garlic; there may be other Frenchmen who do not stink and whom you do not dislike. This is a defining or limiting clause, because it defines, i.e. changes what the main clause refers to. The relative clause is an essential part of the sentence; it is very strongly linked to the main clause.




With adverbial clauses, such as those introduced by because, while, etc., the rules are much less clear and strict, but somewhat similar effects can be observed. I think the difference there is more about focus than about actual defining.



You speak like that, because you are unsure of yourself.



The comma separates the because clause from the main clause; the result is that the statement in the main clause gets a focus of its own, that is, it becomes more like a separate statement: "Hey, you speak like that, did you know that? Oh, and I also wanted to say that the reason why you speak like that is that you are insecure." So you could say there are two somewhat independent statements being made.



You speak like that because you are insecure.




This means, "I wanted to tell you about the reason why you speak like that: it is because you are insecure." There is no focus on the mere fact that you speak like that: this is more or less treated as a known fact or opinion. It is merely mentioned so that the listener knows what you are referring to; that is, it establishes the topic, the background against which you are going to make your statement. When something is just mentioned like that, it is said to be topical.


This distinction between elements in a sentence that are more topical and those that are more focal is very important in pragmatics as a linguistic sub-field.


In this example, it seems unlikely that you want to make two separate statements with two strong foci: I really think you wouldn't phrase it like that if you really wanted to make two semi-independent statements. I rather think "you speak like that" is best interpreted as topical information, and then the comma would be wrong.




In very long sentences, however, a comma can be added even though there are no two statements, just to make the sentence easier to read. Similarly, if the because clause comes before the main clause, a comma is often added for the same reason, so there it carries no special meaning. Note also that, in very informal texts, people tend to use commas to represent pauses in speech even more than usual, and many casual readers would probably not bat an eyelid at your comma in e.g. a chat room.


All this more or less applies to other conjunctions too, like while, when, although, etc.


raycasting - Determine if ray hits an edge of a model in Unity 3D


How can I determine whether a ray hit an edge of a mesh?


The figure below shows what I want to achieve:


[image: a ray hitting the edge of a mesh]



Any ideas?



Answer



Adjacency information like this isn't needed for most stuff we do with meshes in games, so unfortunately the out-of-the-box representations don't make it easy to access.


The simplest way to do this, especially if you have only a few, simple meshes that need this, is to hand-place a narrow capsule collider along each edge you want to detect with raycasts this way, then fire your raycast against the physics layer containing these edge mark-up shapes.




If you want to automatically detect these edges, I'd recommend applying a pre-processing step to help accelerate the queries. You can do this at edit time or in-game when you load a mesh that needs this metadata.




  1. Iterate over all the mesh's vertices and build a lookup table that associates vertices in the same position with a shared index.





  2. Prepare an associative map of edges, keyed by the shared indices of the two vertices each edge joins (always put the lower index first, so you get the same key regardless of order).




  3. Iterate over all the mesh's triangles and compute a face normal for each one by averaging its vertex normals. Use the edge map to store a reference to this triangle and its normal associated with each edge of the triangle.




  4. When you find two triangles that share an edge, check the angle between their face normals. If it's less than some threshold you choose, we can call this an internal edge and ignore it. If it's greater, then we call this a sharp edge and mark it up for raycast hits as follows. (You could also take into account whether it's a smooth-shaded edge or creased, if you choose)





  5. Build an associative map of crease edges. This will again be keyed by vertex indices, but we'll use their original indices this time, not the shared aliases we used before. And we'll list them in the key in the order in which they appear consecutively as we wind around the triangle, so the two triangles meeting at an edge give opposite orders (assuming manifold geometry).




  6. We add our newly-discovered sharp edge to this final map. The value we store with this key is a float tolerance range, computed as the maximum tolerance distance you choose for your raycasts (how far from the edge can we hit and still call it an "edge hit"?) divided by the length of the altitude of the parent triangle measured from this edge to the opposite vertex.




Okay, that was exhaustive. But now we can keep this final map structure for each mesh, and throw away the intermediate structures we built along the way.


Now when we get a candidate raycast hit, we can get its triangle index property, and use that to check whether any of the three edges of the triangle are in the mesh's sharp edge map. If so, we get the hit's barycentric coordinate property, and check whether the weight for the opposite vertex is less than the tolerance value we stored in our map for that edge. If so, then we have struck within our tolerance distance of a sharp edge! :D
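For what it's worth, that preprocessing can be sketched in plain code. Below is a rough, language-agnostic rendition in Python of steps 1 through 6; every name is my own invention, the vertex-welding precision is an arbitrary choice, and boundary/non-manifold edges are simply skipped:

```python
import math
from collections import defaultdict

def build_sharp_edge_map(vertices, normals, triangles,
                         angle_threshold_deg=30.0, tolerance=0.1):
    """Sketch of preprocessing steps 1-6.

    vertices:  list of (x, y, z) positions
    normals:   list of (x, y, z) vertex normals
    triangles: flat list of vertex indices, three per triangle
    Returns {(i, j): barycentric tolerance}, keyed by original vertex
    indices in winding order, one entry per triangle sharing the edge.
    """
    # Step 1: alias vertices that share a position to a single index.
    shared, alias = {}, []
    for i, v in enumerate(vertices):
        key = tuple(round(c, 5) for c in v)
        alias.append(shared.setdefault(key, i))

    def normalize(n):
        length = math.sqrt(sum(c * c for c in n)) or 1.0
        return [c / length for c in n]

    # Steps 2-3: record each triangle (with a face normal averaged from
    # its vertex normals) against the edges it touches.
    edges = defaultdict(list)
    for t in range(0, len(triangles), 3):
        a, b, c = triangles[t:t + 3]
        n = normalize([sum(normals[i][k] for i in (a, b, c)) for k in range(3)])
        for u, v, opp in ((a, b, c), (b, c, a), (c, a, b)):
            key = (min(alias[u], alias[v]), max(alias[u], alias[v]))
            edges[key].append((u, v, opp, n))

    def altitude(p, a, b):
        # Distance from point p to the line through a and b.
        ab = [b[k] - a[k] for k in range(3)]
        ap = [p[k] - a[k] for k in range(3)]
        cross = [ab[1] * ap[2] - ab[2] * ap[1],
                 ab[2] * ap[0] - ab[0] * ap[2],
                 ab[0] * ap[1] - ab[1] * ap[0]]
        return (math.sqrt(sum(c * c for c in cross)) /
                (math.sqrt(sum(c * c for c in ab)) or 1.0))

    # Steps 4-6: keep only edges whose two faces bend past the threshold,
    # storing the raycast tolerance as a barycentric weight.
    sharp = {}
    cos_threshold = math.cos(math.radians(angle_threshold_deg))
    for faces in edges.values():
        if len(faces) != 2:
            continue  # boundary or non-manifold edge; ignored here
        if sum(x * y for x, y in zip(faces[0][3], faces[1][3])) < cos_threshold:
            for u, v, opp, _ in faces:
                sharp[(u, v)] = tolerance / altitude(
                    vertices[opp], vertices[u], vertices[v])
    return sharp
```

At query time you would look up each of the hit triangle's three edges (in winding order) in the returned map and compare the barycentric weight of the opposite vertex against the stored tolerance.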




Okay, so that was a lot of work. One last strategy we could try is somewhat empirical:



When you get a candidate raycast hit, get the triangle normal. Using this, you can predict an expected depth for subsequent raycasts fired from slightly offset positions (as if you were hitting a flat plane). Fire a few such raycasts in the same direction as your original, from positions slightly offset from the first hit. If the depth of any of these raycasts is much larger than the prediction, then you've grazed past an edge, and you can treat the original hit as an edge hit.
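To make the depth-prediction step concrete, here is a small illustrative sketch; the helper name and threshold are made up, and the engine raycasts are abstracted into (origin, measured depth) pairs:

```python
def grazed_edge(ray_dir, hit_point, hit_normal, samples, depth_jump=0.5):
    """Return True if any offset ray overshoots the depth predicted by
    treating the first hit's surface as an infinite flat plane.

    samples: list of (origin, measured_depth) pairs for extra rays fired
    in the same direction, from positions slightly offset from the first.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    denom = dot(hit_normal, ray_dir)
    if abs(denom) < 1e-9:
        return False  # ray is parallel to the surface; no prediction
    for origin, measured in samples:
        # Depth at which this ray would hit the plane through hit_point.
        expected = dot(hit_normal,
                       [h - o for h, o in zip(hit_point, origin)]) / denom
        if measured - expected > depth_jump:
            return True  # this ray sailed past the edge
    return False
```

A hit on a flat stretch of surface makes every sample match its prediction; a sample that overshoots by more than depth_jump indicates the original hit grazed an edge.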


do support - May I omit "do" in a dependent clause?


In a sentence like this:



Many people realize that they didn't do any wrongdoings, so they cannot understand why they have to be punished.

or


Many ..., so they cannot understand why do they have to be punished.*



Which sentence is right?



I know in an independent question like "Why do you fight?" we cannot omit the "do". But in a clause like the example I gave above, may I omit the "do"?




past participles - Is "named" an adjective in this sentence?



"Let's pretend that this monkey belongs to a girl named Bianca"



Could anyone help me identify what construction "named" is part of in this sentence?


Is 'named' here an adjective?




conjunctions - 'if', meaning 'even if'. Why would omission occur in some cases?


From Michael Swan's Practical English Usage 261.10: If, meaning 'even if'



We can use 'if' to mean 'even if'.



I'll finish this job if it takes all night.



I wouldn't marry you if you were the last man in the world.




I wonder if we can omit 'even' in any other 'even if' clauses. If not, when is the omission acceptable without causing any misunderstanding?




word usage - Ain't and negatives


I am puzzled by the use of ain't. I know its meaning, and I also know it is pretty informal. But I see it used in several ways, some of which seem to me to conflict.


See the following examples






I ain't an idiot



expands to



I am not an idiot



Which, in my understanding means "Well, I am declaring myself not an idiot"






It just ain't done good



expands to



It just is not done good



Which, in my understanding means "An action didn't generate good results"






But I ain't marchin' anymore



expands to



But I am not marching anymore



Which, in my understanding means "I am saying, that I will stop marching"





Hey, ain't gonna cry no more today




expands to



Hey, I am not going to cry no more today



This one is a bit weird, but in my understanding means "I am declaring my intentions of crying no more than I already did, today"





I ain't no quitter




expands to



I am not no quitter



Which, in my understanding means that "I am a quitter", since the first negation negates the second...


I really think it means "I am no quitter", but then "ain't" doesn't abbreviate the length of "am", so I don't get the usage of "ain't", or perhaps this phrase ain't a good example, even for an informal contraction like ain't.


Is it correct to use ain't followed by a no? Does the "no" lose its negation?


I really hope I ain't that confused by the answers...




conditional constructions - There are a couple of sentences that I'd like to compare


There are sentences that I'd like to compare.




  1. If you try to take pictures of restricted exhibitions, a member of the staff will ask you to put your camera away.




I'm not sure whether you will try it or want to, but you might do it because you are not informed about the rule or because this is your first visit. Anyway, what I'd like to say is that you will be asked to put your camera away if you try to take pictures. (just giving information)




  2. If you tried to take pictures of restricted exhibitions, a member of the staff would ask you to put your camera away.



I know you are not going to try because you are a considerate person, or you are not able to do it because you don't have a phone or anything to take pictures with, but if you tried to do it, you would be asked to put your camera away. (giving information more politely)



  • Am I right to put it this way?


  • Do these sentences have the same meaning?



Answer



You have it exactly right.


Here's the explanation.


First, the English verb form for indicating a hypothetical situation, known as "the subjunctive", is not very clear and has only very limited uses. So, to add weight to the interpretation that you are speaking hypothetically, you often put a verb in the past tense—even if you are talking about a future situation! That's what happened with tried in your second sentence. When you speak of a consequence of a hypothesis, you normally precede the verb with one of the -ould modal verbs (would, could, should). That's why the second sentence calls for would ask rather than will ask.


Second, a way to soften a statement in order to be more polite or more deferential, is to use a verb form for a hypothesis or consequence—even if that's not really necessary for the literal meaning. The classic examples are "Would you like some tea?" and "Yes, I would like some tea." So, you are exactly right in understanding that your first sentence, with try and will ask, is more blunt and more forceful than the second sentence, with the hypothetical tried and the conditional would ask. And you are exactly right that this choice of verb moods is a subtle way of communicating the speaker's expectation about whether the listener intends to take pictures.


conjunctions - With that or without "that", which one is more formal to write?


I have this sentence:



There is an engine inside me that keeps saying "someone must be at the top, why isn't that you?"



Should I write that or leave it out?


I know both ways are correct, but I am asking for the most formal way possible.




vocabulary - Which one is correct: "Do you wake at seven?" or "Do you wake up at seven?"


Someone said to me that the first one is the correct answer, because you use "wake up" to ask someone to stop sleeping. But I think the first one does not sound natural.



Answer



In modern (American) English, "wake" in your example is generally followed by "up". But this isn't always the case.


I don't always like to use NGrams but here's a helpful image that shows the recent trend:


Ngram for wake up vs wake up at


Of course, this misses various other similar phrases, like the command "wake me at 7 am"... which is fine but can also be phrased "wake me up at 7 am".


Ngram for wake me up at vs wake me at


So, to answer your question: in modern English, "wake up at" is more appropriate and common, but "wake at" is not technically incorrect... it's simply not in much modern use, and you may find pockets where it is still quite acceptable.



algorithm - How can I ensure a grid can be filled with Tetris-like pieces?


I'm thinking of making a puzzle game where the objective is to fill a grid with shaped puzzle pieces (for example, the classic Tetris shapes).



How can I go about generating a set of pieces that can be guaranteed to be used to fill the grid, leaving no gaps? How could I adapt this algorithm to scale the difficulty of the resulting puzzle?




Which word is correct, "existed", "existent" or "existing"?


To express that the results already exist, should I say:



  • "the existed result", or


  • "the existent result", or

  • "the existing result"?




countability - Is "people" a countable or a non-countable noun?


I saw these sentences on the Internet:






  1. There are three people here.




  2. A few people didn't enjoy the play.






Now I'm not sure whether people and other collective nouns like team, family and police are countable nouns or uncountable nouns. Is there an explanation for how these words work?




Wednesday, May 30, 2018

unity - (2D) Detect mouse click on object with no script attached


I'm creating a 2D project which does not have a character, so I created an empty GameObject and attached my script to it. In the script, I have declared other objects which are in the scene like this:


public GameObject obj1, obj2, obj3;


How can I know if obj1 or obj2 or obj3 was clicked? Is raycasting the only solution?



Answer



Yes, you can use raycasting to detect the objects in your scene. You don't need to attach a custom script to the game objects you want to detect, but you do need to attach colliders to them.


In the update method of your script, attached to the otherwise empty object, you can check for when the mouse button is pressed. Then, cast a ray into the scene from the camera, through the mouse. Something like the following:


void Update() {
    if (Input.GetMouseButtonDown(0)) {
        Debug.Log("Pressed left click, casting ray.");
        CastRay();
    }
}

void CastRay() {
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    RaycastHit hit;
    if (Physics.Raycast(ray, out hit, 100)) {
        Debug.DrawLine(ray.origin, hit.point);
        Debug.Log("Hit object: " + hit.collider.gameObject.name);
    }
}

javascript - Energy Bar in Unity



I want to display a debug message when the progress bar is fully loaded, and I have a "Refresh" button; when I click it, it should reload the progress bar. Here is the code I have tried:


var energyBar : GUIStyle;
var bgImage : Texture2D; // background image that is 256 x 32
var fgImage : Texture2D; // foreground image that is 256 x 32
static var playerEnergy = 1.0; // a float between 0.0 and 1.0

function Start() {
}

function Update() {
    playerEnergy = Time.time * 0.02;
    if (playerEnergy <= 0) {
        Debug.Log("stopped working");
    }
}

function OnGUI () {
    // Create one Group to contain both images; the first two numbers define the on-screen placement
    GUI.BeginGroup (Rect (10, 10, 256, 32));

    // Draw the background image
    GUI.Box (Rect (0, 0, 256, 32), bgImage, energyBar);

    // Create a second Group which will be clipped
    // We want to clip the image and not scale it, which is why we need the second Group
    GUI.BeginGroup (Rect (0, 0, playerEnergy * 256, 32));

    // Draw the foreground image
    GUI.Box (Rect (0, 0, 256, 32), fgImage, energyBar);

    if (GUI.Button(new Rect(100, 200, 60, 30), "Refresh")) {
        // code to Restart the progress bar
    }

    // End both Groups
    GUI.EndGroup ();
    GUI.EndGroup ();
}

With the above code I am able to display the progress bar, but the "Refresh" button is not being displayed, and I need help reloading the progress bar when the button is clicked. I also need to print the debug message when the progress bar is fully loaded, but at the moment the debug message is displayed when the progress bar starts loading. Can anybody please help me out?


I have also attached the textures I am using:


[images: energy bar background and foreground textures]





game design - Calculating the output of two armies fighting



I am programming a strategy game using Flash. The game works very much like the famous game "Travian".


My problem is as follows: I am trying to calculate the troops lost as a result of a fight between two armies. The two armies have different types of units; some are stronger against certain other units and weaker against other types.


How can I incorporate the effect of these differences into the equation for the fight?


It seems easy if the units only have attack and defence points, but when it comes to the dependency on unit types, I am lost.



Answer



In addition to supporting Amit's suggestion of looking at the Lanchester equations, I just want to add that this is a game design decision, not an empirical fact that we can give you. If you want to take unit type into account, you have to decide what that means. This means choosing an equation that includes all the factors that you want your gameplay to include. If you want infantry to be better than cavalry, then you have to decide what that should mean - eg. how many cavalry do you need to equal 100 infantry? And does it matter who attacks who? You seem to be implying that simply giving infantry and cavalry different attack and defence values isn't good enough - why is that? What else are you trying to represent that can't be captured just by those values?


You have to decide which factors you want to model in your game, as they affect the way the players will approach it. These might include unit size/quantity, unit type, unit experience (eg. veteran status), terrain and environmental effects, differences between attacking and defending if any, whether to model damage and attrition or not, whether to model the passage of time during the combat, the ability to withdraw or flee (possibly including modelling of morale), how much randomness you want in the equation, and so on.


Once you know all this, there are several basic mathematical approaches you can take. You could do a round-by-round "chance to hit" system like many RPGs have, e.g. the d20 combat system. You could do a one-round "attack vs defence" weighted coin toss system like the original Civilisation game does. You could have each side generate a score by adding attributes to a random number, with the highest value winning. And you can permute these systems to work on a round-by-round basis, or to deduct hit points or morale points, or whatever. Any system can work, but you have to balance it the way you want it to play, as ultimately the choice of how to model the combat is a key part of the game design and not something other people can just give to you.
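As one concrete possibility, here is a hedged Python sketch combining a unit-type effectiveness table with a one-round weighted coin toss of the kind mentioned above. Every number, type name, and formula here is an invented design knob, which is exactly the point: you would tune them to the gameplay you want.

```python
import random

# Hypothetical effectiveness table: attacker type -> defender type -> multiplier.
EFFECTIVENESS = {
    "infantry": {"infantry": 1.0, "cavalry": 1.5, "archer": 1.0},
    "cavalry":  {"infantry": 1.0, "cavalry": 1.0, "archer": 1.5},
    "archer":   {"infantry": 1.5, "cavalry": 1.0, "archer": 1.0},
}

def army_power(army, enemy, table=EFFECTIVENESS):
    """army/enemy: dicts of unit type -> (count, base_attack).
    Each unit's attack is scaled by its average effectiveness
    against the enemy's composition."""
    enemy_total = sum(count for count, _ in enemy.values()) or 1
    power = 0.0
    for utype, (count, attack) in army.items():
        # Weight the multiplier by the fraction of the enemy of each type.
        mult = sum(table[utype][etype] * ecount
                   for etype, (ecount, _) in enemy.items()) / enemy_total
        power += count * attack * mult
    return power

def fight(attacker, defender, rng=random.random):
    """One-round weighted coin toss: your chance to win is your
    share of the combined power."""
    a = army_power(attacker, defender)
    d = army_power(defender, attacker)
    return "attacker" if rng() < a / (a + d) else "defender"
```

With this table, 100 archers have the edge over 100 equal-strength infantry on average, because their attack is multiplied by 1.5 against that composition.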


spritebatch - How to use batch rendering with an entity component system?


I have an entity component system and a 2D rendering engine. Because I have a lot of repeating sprites (the entities are non-animated, and the background is tile-based), I would really like to use batch rendering to reduce calls to the drawing routine. What would be the best way to integrate this with an entity system?


I thought about creating and populating the sprite batch every frame update, but that would probably be very slow. A better way would be to add a reference to an entity's quad to the sprite batch at initialization, but that would mean that either the entity factory has to be aware of the rendering system or the sprite batch has to be a component of some cache entity. One case violates encapsulation pretty heavily, while the other forces a non-game-object entity into the entity system, which I am not sure I like a lot.


As for the engine, I am using Love2D (Love2D website) and FEZ (FEZ website) as the entity system (so everything is in Lua). However, I am more interested in a generic pattern for how to properly implement this than in a language- or library-specific solution.


Thanks in advance!



Answer




that would mean that the entity factory has to be aware of the Rendering System




If your entity factory function has to create objects for/from the rendering system, then obviously your entity factory function needs to create objects for/from the rendering system. There's no avoiding that; it's a part of what you need to do.


A component system is not supposed to eliminate dependencies. It's supposed to minimize them and focus them on the necessary dependencies, not the unimportant ones.


Users of your entity class shouldn't need to specifically know about the rendering system, unless they are dealing with the visual representation of an entity. Users of your entity creation functions may need to because, as code responsible for building entities, they need to know about all of the things that entities depend on. Thus, the dependency is focused on where it is needed.


Doing this doesn't violate encapsulation; the entity creation process needs to create rendering system objects (as well as possibly sound objects and other such things). It needs to know about these things in order to do its job.


To create an entity, the entity factory code must either be given some kind of rendering system object to create the visual representation from, or it must go out and find it. One of these things has to happen. You can create abstraction after abstraction, and it still will need to happen somewhere. If an entity needs a visual representation, then some piece of code somewhere is going to have to know about both entities and visual representations.


Tuesday, May 29, 2018

phrase meaning - There she is, there he is, there you are, and there it is



In a movie, when a lady entered the room, a man exclaimed: "There she is!" I think this means something like drawing people's attention to her, kind of introducing her to the rest of the public.


If we replace she with he certainly the meaning won’t change.


But if we use you the meaning changes completely according to Cambridge Dictionaries Online: used when giving something to someone or used to mean “I told you so”.


When I replaced the personal pronouns with it, I was surprised to find nothing on the Internet to explain the phrase, except for some references from Vietnam.


Would you please help me to understand the meaning of these phrases?



Answer




There [he/she/it] is!



is an exclamation that you have found something you were looking for. The speaker was probably wondering where she was, or at the least talking about her, when she appeared.



The reason it's not listed in the dictionaries is that, although it's a common phrase, it's not a "special" phrase - it's just a normal grammatical construction and its meaning is derived from the meaning of the constituent parts:


There (location) he (subject) is (verb to be) !


The emphasis is usually on there because you're emphasizing that you didn't know the location of the person (or object) and now you've found it: he/she/it is there.



There you are!



can also be used in this way. It means, "I was looking for you and now I've found you" or "I wondered where you were, but now you've appeared." You wouldn't say it in the middle of a conversation; you'd say it at the start.


However, there is another meaning of There you are!, which is the one you found. It's not more common, it's just that the phrase itself has a particular meaning beyond the definition of the individual elements, and so it's more necessary to define it.



Here I am!




would be the equivalent for the first person. (You can't say "There" because the here/there distinction is relative to the speaker.)


It has a similar meaning: you are announcing your arrival because you believe everyone was expecting you somehow - either looking for you, or waiting for you, or something similar.


perspective - An object twice as close appears twice as big?


So I was thinking about creating a 2D game where you can also move along the Z-axis, by changing in which layer you are. Depending on the depth I want to scale my 2D sprites.


Once, someone showed me a demo in which he had a lot of 2D sprites, and by scrolling he could change the depth of the camera. So when zooming in, the objects would come closer to the player and appear bigger. Then I wondered: how much bigger should an object be when it gets 1 unit closer? How would you calculate that? The guy told me there was one basic rule he used: "objects twice as close appear twice as big."


Now, by testing it myself, I know that rule doesn't apply in the real world ;) But is there some constant that is used in real world calculations for perspective or something? Or a formula?


I know this might not be the best place to ask such a question, but since this is the only site I use for game-related questions, and my context is a game, I thought I'd give it a try. Also, I am kind of expecting that there is this person here that knows everything about 3D perspectives and matrices or something, since it might relate to 3D games ;)


tl;dr:



"An object twice as close appears twice as big." That is not true in the real world. But which constant or formula is correct?



Answer



Generally it is true, depending on your viewpoint, the direction in which the object has moved, and the viewing angle.


Example of perspectives for objects


Note how in the first camera view, as the red block is perpendicular to the camera view, the object seems to be twice as large, in a perfect 1:2 ratio (note the arrow showing that it hits the edge of the view after being moved twice as close).


The second is the same-size block rotated 45 degrees. Because it is rotated, the bottom edge is no longer at the same distance from the camera as the top edge, so it does not SEEM to scale correctly to a 1:2 ratio, but it is in actual fact twice as large (it subtends the same angle at the far blue block as at the near blue block).


In conclusion, this means your friend was correct: apparent size is inversely proportional to distance, so "objects twice as close appear twice as big" is a good rule to use for scaling your objects.
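For completeness, the rule is just the standard pinhole-perspective relationship: on-screen size is proportional to 1/distance. A one-function Python sketch (the focal length is an arbitrary scale factor you would tune for your layers):

```python
def projected_size(world_size, distance, focal_length=1.0):
    """Pinhole-camera projection: on-screen size is proportional to
    1 / distance, which yields the 'twice as close, twice as big' rule."""
    return world_size * focal_length / distance
```

Halving the distance doubles the projected size, so for layered 2D sprites you can simply scale each sprite by focal_length / layer_depth.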


Monday, May 28, 2018

verb agreement - either + singular + 'have' or 'has'?


Is it legal to use has in the following question?




Would you tell me please whether either my friend Joey or me has to visit the manager tomorrow?



Or, instead, should I use have rather than has?




algorithm - How to remove floating terrain when generated with 3D Perlin Noise?



I'm currently using 3D Perlin Noise to generate random terrain in combination with Marching Cubes.


My issue seems to lie in scaling the noise function to get reasonable heights in my terrain. If only one octave is used, all points in relation to the noise are obviously interpolated which results in fairly smooth data. However, to get overhangs/caves/jagged terrain, I have to use multiple octaves with varying amplitudes and/or scale the resulting values to my 'max' height I want of the map.


I might be missing something, but by using marching cubes with 3D perlin noise, I regularly get floating bits of terrain.


Any ideas on how I could correct this, or whether there are other noise functions I might use to get the results I'm describing? I want hills/valleys/mountains/lakes/etc., with no floating bits segregated from the rest of the map. I believe this is similar to the issue Minecraft has when terrain occasionally just floats in the sky upon generation.


Thanks for any assistance, Mythics



Answer



Probably the only way to completely get rid of floating terrain is to test for connectivity. Depending on the size of your map, that might be an option. You can do that by picking a base point, like the very bottom of your world. Then ensure all your voxels are either:



  1. Connected to the base.

  2. Connected to a voxel that's connected to the base.



Likely starting at a known connected point (like the bottom of your world) then doing a breadth first traversal of connected voxels would work pretty well. Make sure you're flagging voxels as connected while you go. You can do this per-chunk.
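That traversal might look something like the following sketch in Python, using face (6-neighbour) adjacency and the bottom layer y == 0 as the base; both choices are assumptions you would adapt to your chunk layout:

```python
from collections import deque

def keep_connected_voxels(solid):
    """solid: set of (x, y, z) coordinates of solid voxels.
    Returns only the voxels reachable from the base layer (y == 0)
    via face-adjacent steps; everything else is floating terrain."""
    connected = set(v for v in solid if v[1] == 0)  # seed with the base
    queue = deque(connected)
    while queue:  # breadth-first traversal of connected voxels
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if n in solid and n not in connected:
                connected.add(n)
                queue.append(n)
    return connected
```

Anything not in the returned set can be carved out of the density field before running marching cubes.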


Additionally, there are less expensive ways to minimize, but not completely remove, the floating terrain.




  • Gradient bias: You can bias the placement of solid voxels with a gradient. The gradient function would fall off somewhat sharply after you're above the "ground level". This article talks about using a gradient and has some other neat tips.




  • Height map noise: You can generate noise as a height map instead. This would remove the overhangs and caves, but you can simulate those with an additional subtractive noise layer.





  • Less noisy noise: Use fewer octaves in your terrain noise and generate subtractive noise to get overhangs and caves.




Does Unity for PC use Direct3D or OpenGL?



I am a mac developer using Unity and I hardly use a PC. When you build a Unity game for Windows, does it use Direct3D or OpenGL?


P.S. I'm not sure if it's called Direct3D or DirectX



Answer



Unity supports several renderers for its various platforms, Direct3D and OpenGL among them. You can find references to this fact in the release notes, for example, and in this documentation explaining some differences between renderer implementations that users should be aware of.


It appears that by default, Unity will use D3D on Windows. You can force it to use an OpenGL rendering path, apparently, via a command-line argument (although that thread is quite old). Configuring the rendering path in your game settings appears to be more about deferred versus forward renderers, and not the underlying API used.


path finding - Pseudo-code examples of A*?



I'm looking for pseudo-code examples of the A* pathfinding algorithm that actually works. I tried plenty of different ones where it's not really clear how to implement them at all times. Keep in mind that I'm a newbie, so if everything could be detailed, that'd be great.



Answer




I think the pseudocode in the Wikipedia article is sufficiently detailed. If you need more details and a working implementation then A* Pathfinding for Beginners is a good article. It has C++ sample code with comments.
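Since the question asks for something that actually works, here is a compact Python version of the textbook algorithm. The graph is abstracted behind a neighbors callback, and the heuristic must be admissible (never overestimate the remaining cost) for the returned path to be optimal:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """A* search. neighbors(node) yields (next_node, step_cost) pairs;
    heuristic(node) estimates the remaining cost to goal.
    Returns the list of nodes from start to goal, or None."""
    open_heap = [(heuristic(start), 0, start)]  # (f = g + h, g, node)
    came_from = {}
    best_g = {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            # Walk the parent links backwards to rebuild the path.
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was found later
        for nxt, cost in neighbors(node):
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                came_from[nxt] = node
                heapq.heappush(open_heap, (ng + heuristic(nxt), ng, nxt))
    return None  # goal unreachable
```

For a square grid, neighbors would yield the open adjacent cells with cost 1, and Manhattan distance serves as the heuristic.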


opengl - glsl demo suggestions?



In a lot of places where I interviewed recently, I have been asked many times whether I have worked with shaders. Even though I have read about and understand the pipeline, the answer to that question has been no. Recently, one of the places asked me if I could send them a sample of 'something' that is "visually polished".


So, I decided to take the plunge and wrote some simple shaders in GLSL (with OpenGL). I now have a basic setup where I can use VBOs with GLSL shaders.


I have a very short window left to send something to them, and I was wondering if someone with experience could suggest an idea that is interesting enough to grab someone's attention.


Thanks




Sunday, May 27, 2018

prepositions - "What time...?" or "At what time...?" - what is more grammatically correct?


This question may sound silly, but it has been bugging me for years.


If I ask a question about a precise point in time, should I say "What time..." or "At what time..."?


For example,




At what time does Billy arrive at the swimming pool?



or



What time does Billy arrive at the swimming pool?




Answer



The initial preposition at in such contexts is entirely optional, but it usually wouldn't be included (although in reality we usually use when rather than [at] what time anyway :).


OP's specific example happens to include a "location-based" clause based on at [the swimming pool], but it might be worth looking at two slightly different contexts...




1a: What time does the shop open?
1b: At what time does the shop open?
1c: What time does the shop open at?



...and...



2a: Where did you come?
2b: From where did you come?
2c: Where did you come from?




In my opinion, both 'b' versions above are at least slightly stilted / awkward. But whereas 1a and 1c carry the same meaning, that's not the case with the second pair. 2a (without the preposition from) is effectively asking where you ended up, not where you started from (speaker might be asking your final position in a race, for example; Where did I come [in the marathon]? I did pretty good, actually - I came third, out of 2000 runners).


The point being that because there's no credible alternative meaning in the first pair that depends on whether the preposition is included or not, it's entirely a stylistic choice (and on average we don't bother with unnecessary words). But the second example shows that we do include the preposition where it's required to avoid ambiguity.


List of Anti-Cheat Packages for MMOs?



I would like to start this topic to collect programs that counter cheating in online games.


I am not aware of a big list, but with everyone's help I hope to make this a fine topic.



Posting format:


NAME: nProtect GameGuard


GAME TYPE: any type of game


Website: http://global.nprotect.com/product/gg.php


FEATURES?



  • Protects the user's PC from fast-evolving hacking technologies and viruses.

  • Real-time protection: diagnoses the patterns of the latest hacking tools that can harm games; memory-debugging prevention technology is also applied, blocking hacking tools through real-time monitoring.

  • Blocks hacking attacks through memory scan

  • Blocks online outflow of personal information




Answer



NAME: xTrap


GAME TYPE: any type of game


WEBSITE: http://www.wiselogic.co.kr/product4.htm


FEATURES?


Unfortunately I can't read Korean and I haven't found much information regarding this one, but I've seen quite a number of games using it.


Saturday, May 26, 2018

c# - How to create data driven effects/abilities for collectable card game


I'm trying to create cards for a CCG/TCG that are completely "data-driven" from an XML/JSON/external file. I've read multiple articles on this issue already, even here on Stack Exchange, but I can't figure out how to actually code this. I have a simple example:


My two cards have the following abilities



1) "Increase the value of all adjacent red cards by 1"


2) "Increase the value of this card by the amount of all adjacent red cards"


In my code I have the following functions (pseudo) that I need to accomplish these effects:


Increase(targetCards, byValue) // increases a card's value by another value
FindCards(filter=adjacent) // returns all adjacent cards
FilterCards(targetCards, filter=isRed) // returns only the cards matching the filter
Count(targetCards) // counts the number of cards

To create effect 1 i would use the functions like this:


Increase(FilterCards(FindCards(adjacent), isRed), 1);


and to create effect 2 i would use:


Increase(this, Count(FilterCards(FindCards(adjacent), isRed)));

So here's the problem: I have no clue how I can describe these functions in my external file, and I also have no clue how I actually tell my functions to work in this combination to achieve the desired effects. Maybe I'm misunderstanding the concept here, but as far as I understood, I can create some pseudo script/code in my external file that my code can then interpret and execute. I feel I'm somehow missing the middle part, where I write code to interpret these commands I'm defining in the external file.


Is there anyone with experience here who can give me some super simple starter ideas on how to tackle this? Especially the part where I combine several functions into one.
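For what it's worth, here is one minimal sketch of the missing "middle part" (in Python for brevity; every name here — the op registry, the context layout, the JSON shape — is hypothetical, not a standard API). The external file describes a tree of calls, and a tiny interpreter recursively evaluates the arguments before dispatching each op to a real game function:

```python
import json

# Effect 1 ("Increase the value of all adjacent red cards by 1") as it
# might appear in an external JSON file: a nested tree of op calls.
EFFECT_1 = json.loads("""
{"op": "Increase",
 "args": [{"op": "FilterCards",
           "args": [{"op": "FindCards", "args": ["adjacent"]}, "red"]},
          1]}
""")

def evaluate(node, ctx):
    """Recursively evaluate an effect tree. Plain strings/numbers are
    literals; dicts with an "op" key are calls whose args are evaluated
    first, then passed to the registered game function."""
    if not isinstance(node, dict):
        return node
    args = [evaluate(a, ctx) for a in node["args"]]
    return OPS[node["op"]](ctx, *args)

# The registry: this is the glue that maps op names from the data file
# onto actual engine code.
def op_find(ctx, where):
    return ctx["cards"][where]            # e.g. all cards adjacent to this one

def op_filter(ctx, cards, color):
    return [c for c in cards if c["color"] == color]

def op_count(ctx, cards):
    return len(cards)

def op_increase(ctx, cards, amount):
    for c in cards:
        c["value"] += amount
    return cards

OPS = {"FindCards": op_find, "FilterCards": op_filter,
       "Count": op_count, "Increase": op_increase}

# Usage: two adjacent cards, one red. Running the effect bumps the red one.
ctx = {"cards": {"adjacent": [{"color": "red", "value": 3},
                              {"color": "blue", "value": 5}]}}
evaluate(EFFECT_1, ctx)
print(ctx["cards"]["adjacent"])
# [{'color': 'red', 'value': 4}, {'color': 'blue', 'value': 5}]
```

Effect 2 composes the same way: the amount argument of Increase becomes a nested Count/FilterCards/FindCards subtree instead of the literal 1.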




graphics - Mobile Game Development: Supporting Multiple Screen Resolution for iOS


Questions




  1. When designing the graphics of the game (e.g. for iPhone): What resolution should I base it upon? Should it be the smaller resolution (iPhone 4) so that I can scale it up for bigger-resolution devices later on, or the bigger resolution (iPhone 6 Plus) so that I can scale it down for smaller-resolution devices? Which solution would be best to implement without distorting the graphics?




  2. I am creating a game now (an endless-runner type game) and I am using parallax backgrounds. What would be the best option for supporting multiple resolutions? Scale the background so that it looks the same on every device, even though scaling might distort the image or reduce the viewing area? Or let more of the background be seen on larger-resolution devices, keeping the graphics as they are, but letting users with larger screens see objects (the next part of the parallax background) that might not be visible on smaller devices?





My question pertains to iPhone devices (iPhone 4/4s, iPhone 5, iPhone 5s, iPhone 5c, iPhone 6, iPhone 6 Plus) for now.



Answer





  1. The easiest solution would be to create the graphics at the size of the highest resolution, but older iPhones have little memory so this solution would only work (smoothly) on the newer models. Instead you should target 2 resolutions. 1920 x 1080 for the iPhone 6(+) and 1136 x 640 for the 5(S). In this way you will only have to downscale for the iPhone 6. This downscaling should not have a huge effect on performance since the hardware in the iPhone 6 is more than capable.




  2. Generally you would want the experience to be the same across all devices. The iPhone 5 and up all have a 16:9 aspect ratio. Unfortunately the iPhone 4(S) doesn't. For these you will either have to downscale the content a little further and have an empty space on the top/side or have some content falling off. The latter probably isn't the best idea so downscaling and having a little extra space on top seems like the best solution.





[Image: the iPhone 5(S) screen, with the iPhone 4(S) area and the 16:9 letterbox areas marked as regions 1 and 2]


The whole image is the screen of the iPhone 5(S); cut off area 2 and you get the screen of the iPhone 4(S). Now, if you want the same content visible on the iPhone 5 and 4, you will need to keep the same aspect ratio. The 5's aspect ratio is 16:9. When we apply this to the iPhone 4, we get area 1. The grey areas will not have any content; you can fill them with a nice background or display a UI.
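The "downscale and add a little extra space" idea from point 2 boils down to fitting the largest 16:9 rectangle into the screen and centering it. A small sketch (plain Python; the function name is just an illustration):

```python
def fit_content(screen_w, screen_h, aspect_w=16, aspect_h=9):
    """Largest aspect_w:aspect_h rectangle that fits the screen, centered.
    Returns (x, y, w, h); the rest of the screen is the letterbox area."""
    scale = min(screen_w / aspect_w, screen_h / aspect_h)
    w, h = aspect_w * scale, aspect_h * scale
    return ((screen_w - w) / 2, (screen_h - h) / 2, w, h)

# iPhone 4 landscape screen is 960x640 (3:2); fitting 16:9 content into it
# leaves a 50 px letterbox bar on the top and the bottom.
print(fit_content(960, 640))  # (0.0, 50.0, 960.0, 540.0)
```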


countability - Is experience countable or uncountable?


"Seeing the Grand Canyon was certainly____(an /some) experience." Is experience countable or uncountable? Should I use some or an?



Answer



In some instances it's countable, and in others uncountable, depending on its meaning. According to the Longman dictionary, if it means "something that happens to you or something you do, especially when this has an effect on what you feel or think", it's countable. Seeing the Grand Canyon, of course, has a great effect on everybody, so in this case it's countable and you should say: "Seeing the Grand Canyon was certainly an experience". More examples:



Ballooning is quite an experience.


I have a lot of experiences from my trip to Africa.


Seeing the Grand Canyon was a really fantastic experience.


But if it means "knowledge or skill that you gain from doing a job or activity", it's uncountable.


Intransitive and prepositional verbs


To give you some context, I'm reading Michael Swan's 'Practical English Usage', and I'm at a section covering passive voice and verbs with prepositions in the passive.


The author states that "The object of prepositional verbs can become subjects in passive structures." He then gives the example of "Somebody has paid for your meal" transformed into "Your meal has been paid for."


My question is: In this case, is "pay" an intransitive verb of incomplete predication with "for your meal" as the subjective complement, while Swan instead analyzes "pay for" as a prepositional verb whose direct object is "your meal"? Are both analyses correct, or am I missing something?




Answer



I do not see how "meal" can possibly be a subject complement because the person paying is not a meal.


The analysis given by Swan strikes me as sensible: "pay for" is considered a transitive verb and can be put into the passive voice like any transitive verb even though it is a prepositional verb.


It is perhaps not a perfect example because you might make the argument that the verb "pay for" is not an independent verb and is instead an ellipsis



Someone paid for your meal



is an ellipsis for



Someone paid the amount due for your meal




If we interpret the first sentence as an abbreviated form of the second, it becomes even more obvious that "for your meal" does not refer to the person paying; instead, it is a prepositional phrase modifying the object, whether that object is explicit or implicit.


Personally, I see no reason to analyze the sentence as an ellipsis. I accept "pay for" as a legitimate prepositional verb (probably developed through a historical process of ellipsis, because it does not seem to be what is called in German a separable verb).


And whether or not "pay for" is a legitimate prepositional verb, both sentences can be put into the passive voice.



Your meal was paid for



or



The amount due for your meal was paid for.




In any case, Swan's main point does not depend on this specific example. All transitive verbs, whether or not prepositional, can be put into the passive.


3d - How can people recognize what engine a game uses, based off its graphics?


With many games, you can say "oh, that's the Unreal Engine, for sure", or "this was made with an upgraded Rockstar Advanced Engine". We can often recognize the engine used for a game just by looking at its graphics, disregarding user interface.


Why is this? All game engines use the same 3D rendering technology that we all use, and the different games usually have a distinct art style. What's left to recognize?



Answer



Primarily I'd imagine this is down to the shaders. For example, the Unreal engine will have a certain method of handling HDR, a certain method of handling bump mapping, a certain method of handling light scatter, etc.


They will also have a uniform level of clarity in terms of constraints such as texture sizes and colour support.


Additionally the algorithms will be similar. Objects will be tessellated using the same algorithms. AI will make decisions according to the same decision-making architectures.


If the bump mapping is causing insane specular and reacts strongly to changes in lighting for example, you immediately start thinking Doom 3 engine. That's because that shader code is shared between every game using the engine. You wouldn't want to rip something like that out.


"All game engines use the same 3D rendering technology that we all use"


The technology is the same, but the rules that govern how the world actually looks (e.g. lighting, tessellation, LOD, etc.) are all written by the developer. The 3D rendering technology doesn't have that much to do with the visual quality of the things on screen. Even the rules for applying flat ambient lighting are left up to the developer (assuming you're not using the fixed-function pipeline).



You can make your OpenGL app look just like your DirectX app with often trivial difficulty. The underlying rendering technology really doesn't have that much impact except with regards to speed.


mathematics - Scale a normalized 2D vector always to the same length


For any normalized 2D vector, except for ( 0, 0 ), how would I scale the vector to always be the same length?


For example:



int length = 10;

vector v = vector( 0.1, 0.5 );
vector v2 = vector( 0.3, 0.8 );

// Scale v to be length of 10
// Scale v2 to be length of 10
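In case an explicit answer helps: divide the vector by its current length (i.e. normalize it — note the example vectors above aren't actually unit length, so normalizing first covers both cases) and multiply by the target length. A small Python sketch:

```python
import math

def scale_to_length(x, y, length):
    """Scale (x, y) so its magnitude equals length.
    Assumes (x, y) is not the zero vector."""
    mag = math.hypot(x, y)       # current length of the vector
    factor = length / mag        # normalize and stretch in one step
    return (x * factor, y * factor)

v2 = scale_to_length(0.3, 0.8, 10)
print(math.hypot(*v2))  # the new length: 10, up to floating-point rounding
```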


Friday, May 25, 2018

level design - Heightmap, Voxel, Polygon (geometry) terrains


In relation to Heightmap, Voxel and Polygon (geometry) terrains:




  1. What are the main differences between all these three?

  2. Can you form a "smooth" terrain with Voxels, I mean, can you for example get a smooth mountain with Voxels, or Voxels are limited to cubes?

  3. Performance wise, a world 2000x2000 units, what would be faster Heightmap terrain, Voxel terrain or Polygon based, geometry terrain? (Assuming that there is "reasonable" performance gains/optimization done for rendering for every of possibilities)

  4. Are there any more techniques used for terrain creation?

  5. Any good titles representing each of types?


P.S. The polygon-based terrain should be fully triangulated, no squarish stuff.



Answer





With a heightmap, you store only the height component for each vertex (usually as 2D texture) and provide position and resolution only once for the whole quad. The landscape geometry is generated each frame using the geometry shader or hardware tessellation. Heightmaps are the fastest way to store landscape data for collision detection.


Pros:




  • Relatively low memory usage: You only need to store one value per vertex and no indices. It's possible to improve this further by using detail maps or a noise filter to increase perceived detail.




  • Relatively fast: The geometry shader for heightmaps is small and runs fast. It's not as fast as geometry terrain though.
    On systems without triangle based 3D acceleration, ray marching heightmaps is the fastest way to render terrain. This was referred to as voxel graphics in older games.





  • Dynamic LOD/terrain: It's possible to change the resolution of the generated mesh based on distance from the camera. This will cause visibly shifting geometry if the resolution drops too far, but it can also be used for interesting effects.




  • Easy terrain generation/creation: Heightmaps can easily be created by blending noise functions like fractal Perlin Noise and heightmap editors are fast and easy to use. Both approaches can be combined. They are also easy to work with in an editor.




  • Efficient physics: A horizontal position maps directly to (usually) one to four positions in memory, so geometry lookups for physics are very fast.
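As a sketch of why those lookups are cheap: the world position indexes the grid directly, no search required, and the height between samples comes from bilinearly interpolating the four surrounding entries. (Python illustration, assuming one heightmap cell per world unit and a query point strictly inside the grid.)

```python
def terrain_height(heightmap, x, y):
    """Bilinearly interpolated height at world position (x, y).
    Assumes one heightmap cell per world unit and that (x, y) lies
    strictly inside the grid (not on the far edge)."""
    x0, y0 = int(x), int(y)        # the cell containing the point...
    fx, fy = x - x0, y - y0        # ...and the fraction within it
    h00 = heightmap[y0][x0]
    h10 = heightmap[y0][x0 + 1]
    h01 = heightmap[y0 + 1][x0]
    h11 = heightmap[y0 + 1][x0 + 1]
    top = h00 * (1 - fx) + h10 * fx      # interpolate along x twice,
    bottom = h01 * (1 - fx) + h11 * fx   # then once along y
    return top * (1 - fy) + bottom * fy

hm = [[0.0, 1.0],
      [2.0, 3.0]]
print(terrain_height(hm, 0.5, 0.5))  # 1.5
```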





Cons:




  • Exactly one height per x/y coordinate: There usually can't be holes in the ground or overhanging cliffs.




  • Less control: You can only control the precise height of each point if the grid size matches the texture coordinates.




  • Artifacts: If the four vertices that define a sub-quad aren't on the same plane, the split between the two vertices will become visible. This usually happens on steep cliffs with edges that don't follow a cardinal direction.





Heightmaps are by far the most efficient way of rendering terrain and are used in many newer games that don't rely on advanced terrain features and have large outdoor areas. Wikipedia has a list of programs that use heightmaps, but I'm not sure whether that means they only use them as resources or also for rendering, so here are some games that are likely to use them:




  • Just Cause 2: Regions are loaded in square sectors and there are no holes in the terrain. In the demo, there's a deep hole with stretched triangles along the edges where there normally would be a building. (The area is normally inaccessible, but there are mods to remove some of the demo's limitations...)




  • Sims 2 (maybe): Neighborhood terrain is loaded as heightmap, but there are holes where lots (building sites) are placed. There are typical artifacts if you create cliffs on a lot, though, and it's quite tedious to add a cellar to a house (and hide the cliff under a veranda).





  • Valve's Source engine games: Rectangular brushes (static level geometry) can have heightmapped terrain on their faces. In these games, the usual quirks are often hidden with other brushes or props.




It's impossible to tell for sure without looking at the shaders, because every heightmap terrain can also be rendered as a mesh.



Voxel terrain stores terrain data for each point in a 3D grid. This method always uses the most storage per meaningful surface detail, even if you use compression methods like sparse octrees.


(The term "voxel engine" was often used to describe a method of ray marching terrain heightmaps common in older 3D games. This section applies only to terrain stored as voxel data.)


Pros:





  • Continuous 3D data: Voxels are pretty much the only efficient way to store continuous data about hidden terrain features like ore veins.




  • Easy to modify: Uncompressed voxel data can be changed easily.




  • Advanced terrain features: It's possible to create overhangs. Tunnels are seamless.





  • Interesting terrain generation: Minecraft does this by overlaying noise functions and gradients with predefined terrain features (trees, dungeons). (Read Terrain Generation, Part 1 in Notch's blog for more info. There is no part 2 as of 05.8.2011.)
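As a toy illustration of the noise-plus-gradient idea (emphatically not Minecraft's actual generator): a hash-based stand-in for a real noise function, summed over a few octaves and offset by a vertical gradient, gives a solid/air decision per voxel. All constants here are arbitrary tuning values.

```python
def hash_noise(ix, iy, iz, seed=0):
    """Deterministic pseudo-random value in [0, 1) per lattice point;
    a cheap stand-in for a real (Perlin/simplex) noise function."""
    n = ix * 374761393 + iy * 668265263 + iz * 2147483647 + seed * 144665
    n = (n ^ (n >> 13)) * 1274126177
    return ((n ^ (n >> 16)) & 0xFFFFFF) / 0x1000000

def density(x, y, z, ground=8.0):
    """Noise overlaid on a vertical gradient. A voxel is solid where
    density > 0, so terrain sits around y = ground, with the noise term
    roughening the surface (and, with real 3D noise, carving overhangs)."""
    noise = sum(hash_noise(int(x * f), int(y * f), int(z * f)) / f
                for f in (1, 2, 4))         # three octaves, halving amplitude
    return (ground - y) * 0.25 + (noise - 0.75)

# One vertical column of voxels: solid near the bottom, air near the top.
column = [1 if density(3, y, 5) > 0 else 0 for y in range(16)]
print(column)
```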




Cons:




  • Slow: To render voxel data, you either have to use a ray tracer or compute a mesh, for example with marching cubes (which will produce artifacts). Neighboring voxels aren't independent during mesh generation, and the shaders are more complicated and usually produce more complex geometry. Rendering voxel data with a high LOD can be very slow.




  • Huge storage requirements: Storing voxel data uses lots of memory. It's often not practicable to load the voxel data into VRAM for this reason, as you'd have to use smaller textures to compensate for it, even on modern hardware.





It's not practical to use voxels for games that don't rely on voxel features like deformable terrain, but it can allow interesting game mechanics in some cases. Voxel engines are more common in older games, but there are also newer examples:




  • Atomontage engine: Voxel rendering.




  • Worms 4: Uses "poxels". According to Wikipedia it's a mix of voxels and polygons.





  • Minecraft: Uses voxels to represent the terrain in RAM; the graphics are polygon graphics. It's mostly software-calculated, though.




  • Terraria: An example of 2D voxels. I don't know how it renders.




  • Voxels combined with physics: Not a game, but it nicely showcases the destruction potential.





  • Voxatron: A game using voxels for almost all of the graphics, including menus and HUD.





Polygon meshes are the most flexible and precise way of storing and rendering terrain. They are often used in games where precise control or advanced terrain features are needed.


Pros:




  • Very fast: You only have to do the usual projection calculation in the vertex shader. A geometry shader isn't needed.





  • Very precise: All coordinates are stored individually for each vertex, so it's possible to move them horizontally and to increase mesh density in places with finer details.




  • Low memory impact: This also means the mesh will usually need less memory than a heightmap, because vertices can be sparser in areas with fewer small features.
    (See Triangulated irregular network on Wikipedia).




  • No artifacts: The mesh is rendered as-is, so there won't be any glitches or strange-looking borders.





  • Advanced terrain features: It's possible to leave holes and create overhangs. Tunnels are seamless.




Cons:




  • Poor dynamic LOD: Only possible with precomputed meshes. This will cause "jumps" when switching unless additional data maps old vertices to new ones.





  • Not easy to modify: Finding vertices that correspond to an area that should be modified is slow.




  • Not very efficient for collision detection: Unlike in heightmaps and voxel data, the memory address for a certain location usually can't be calculated directly. This means physics and game logic that depend on the exact surface geometry will most likely run slower than with the other storage formats.




Polygon terrain is often used in games that don't have large open areas or that can't use heightmap terrain because of its lack of precision and overhangs. I don't have a list, but I'm pretty sure that




  • every 3D Zelda and





  • every 3D Mario game




use this.



It's possible to create a terrain entirely in the shader pipeline. If the algorithm runs only in the fragment/pixel shader, the detail can be virtually unlimited while memory impact is almost zero. The obvious downsides are almost no control over the shape and problems when the camera intersects the original rendering surface. It's still useful in space games where players don't interact with the surface of a planet. Parameter animations work best with this kind of terrain.


It should be possible to download the generated terrain geometry from the graphics card to use it for the rest of the game engine, but I don't know how the performance of that is or whether this has been done so far.




There is no method that works well for every scenario, but it's fairly easy to choose one for a certain task:




  • Heightmaps are the best solution if you don't need overhangs or holes in the terrain surface and use physics or dynamic terrain. They are scalable and work well for most games.




  • Meshes have the highest precision and can describe overhangs, holes and tunnels. Use them if you have complex terrain that doesn't change often.




  • Voxels are good for describing very dynamic terrain with many complex features. Avoid rendering them directly as they need large amounts of memory and processing.





  • Other methods may be better than any of the above if you don't have to interact with the terrain or need very detailed graphics. They usually work only for very specific scenarios.




It's possible to combine different methods to get features from more than one, for example by tessellating mesh terrain with a heightmap to increase the detail structure of a cliff.


Dynamic terrain generation is heavily used in procedural space simulation and some have become really advanced in the last years. The forums of these projects should have some resources on the topic.


prepositions - "{Pay / Pay for} the expenses"


I have a question about the usage of "pay" and "pay for":





  1. He paid the expense.

  2. He paid for the expenses.



Could both be the same? This dictionary seems to say "pay" and "pay for" are the same.




vocabulary - Framing a question on "results" of a malarial infection ("Enlargement of the spleen and liver and blockage of capillaries in the brain")




What are the results of the infection of malaria?



In the question sentence, is the use of "the results" correct? Can I use "the effects" instead of "the results"? If the answer to the question is :



Enlargement of the spleen and liver and blockage of capillaries in the brain.



I would like to know how to ask the question.




The second or the third conditional?


(The situation: The Town Council is considering demolishing the old city park.)



Which of the following questions is "more correct"?



What would happened if this park was demolished?


What would have happened if this park had been demolished?



I am not sure.



Answer



Neither sentence is acceptable.


The first is grammatically unacceptable.




  • The modal auxiliary would must take an infinitive as its complement:

    What would happen ...





  • And in formal use the verb in the condition clause should take the irrealis form, since would in the then clause is irrealis:



    ... if the park were demolished.




    (However, was is acceptable in non-formal use.)




The second is semantically unacceptable in the circumstances you describe. The past perfect in the conditional clause and the irrealis modal past in the consequence clause mark this as a question about the past, not the future: you are asking about the past consequences of a past demolition, which would only be acceptable if you were indulging in historical speculation.


You have two choices:



a. What will happen if the park is demolished? or
b. What would happen if the park were demolished?



These both ask about the future consequences of a future demolition. The only difference between them is the speaker's attitude toward the demolition: in a. she thinks it quite possible that the park will be demolished and wants to know the likely consequences, whereas in b. she thinks it unlikely that the park will be demolished but is curious about the hypothetical consequences.



meaning in context - What does "if not" mean in the given sentence


Consider this sentence (which comes from an English-Chinese dictionary):



The contest has become personalised, if not bitter.




Then what does the phrase if not mean?


What I'm seeking is a general guide to, or rule for, such usage.



Answer



Let's look at simpler example -



Try to finish at least 10 chapters from that book, if not all.



This means: if finishing all the chapters is not possible, try to finish at least 10.




That smell (from a rotten thing) can cause nausea, if not vomiting



This means that the smell is likely to cause vomiting, but if it does not, it can at least cause nausea. In other words, the smell is capable of causing at least nausea, but it can also come closer to vomiting or, in the worst case, actually cause vomiting.



[Part A of sentence,] if not [part B of sentence].



In such cases, part B is the expected or more extreme outcome, while part A is what is actually likely to happen.



I'll pick you up at 1900 hr, if not earlier.




This simply means the latest will be 1900 hr. The speaker wants to say that he'll try to pick the listener up earlier, but no later than 7 pm.


Another such example may be - I'm a good tennis player, if not a great one.


So, in your sentence, the contest did not turn bitter but at least got personalized.


A later edit (from J.R. and user42307's input): The sentence may also mean that the contest is on the verge of getting bitter (nausea's example) or has become bitter.


Workaround to losing the OpenGL context when Android pauses?


The Android documentation says:



There are situations where the EGL rendering context will be lost. This typically happens when device wakes up after going to sleep. When the EGL context is lost, all OpenGL resources (such as textures) that are associated with that context will be automatically deleted. In order to keep rendering correctly, a renderer must recreate any lost resources that it still needs. The onSurfaceCreated(GL10, EGLConfig) method is a convenient place to do this.




But having to reload all the textures in the OpenGL context is both a pain and hurts the game experience for the user when reentering the app after a pause. I know that "Angry Birds" somehow avoids this, I'm looking for suggestions on how to accomplish the same?


I'm working with the Android NDK r5 (CrystaX version.) I did find this possible hack to the problem but I'm trying to avoid building an entire custom SDK version.



Answer



Replica Island has a modified version of GLSurfaceView that deals with this issue (and works with earlier Android versions). According to Chris Pruett:



Basically, I hacked up the original GLSurfaceView to solve a very specific problem: I wanted to go to different Activities within my app without throwing all of my OpenGL state away. The major change was to separate the EGLSurface from the EGLContext, and to throw the former away onPause(), but preserve the latter until the context is explicitly lost. The default implementation of GLSurfaceView (which I didn't write, by the way), throws all GL state away when the activity is paused, and calls onSurfaceCreated() when it is resumed. That meant that, when a dialog box popped up in my game, closing it incurred a delay because all the textures had to be reloaded.


You should use the default GLSurfaceView. If you must have the same functionality that mine has, you can look at mine. But doing what I did exposed all sorts of awful driver bugs in some handsets (see the very long comments near the end of that file), and you can avoid all that mess by just using the default one.



Edit: I just realized you already posted the link to a similar hack. I don't think there is any built-in solution prior to honeycomb. Replica Island is a popular game working on many devices and you might find Chris's implementation and comments helpful.


Thursday, May 24, 2018

physics - Using Box2D for range detection?


I have two entities. One has a sword, one has a bow and arrow. When the bow entity is 100 units away, he needs to begin attacking. Likewise, when the sword entity is 10 units away, he needs to begin attacking.


My idea is to create an actual physical body for collision detection and a range body (or fixture?) for range detection. However, I don't want the range body to start pushing and affecting other entities. I simply want to detect when one entity's range body collides with another entity's physical body.



Is this the right thing to do and how can I do this with Box2D?



Answer



You can utilize Box2D sensors (or contact listener) for this. Setting one up is fairly simple, and a tutorial can be found here. The basic code (from the link):


var listener = new Box2D.Dynamics.b2ContactListener;

listener.BeginContact = function(contact) {
    // console.log(contact.GetFixtureA().GetBody().GetUserData());
};

listener.EndContact = function(contact) {
    // console.log(contact.GetFixtureA().GetBody().GetUserData());
};

this.world.SetContactListener(listener);

This listener is where you'll set up your response to your objects getting near each other.


You can read a little about them in the official Box2D manual, under section "6.3 Sensors".


phrase request - Does English have an expression for "Straw Enthusiasm"?


In Polish there's an expression Słomiany zapał which is a play on words, Straw enthusiasm and Straw going ablaze.


The idea is that straw burns with a very bright flame but the fire dies out very quickly. The fire is not sustaining and produces little heat in the long run. Following this metaphor, the expression describes a significant (and very common) vice of engaging in new projects with outstanding enthusiasm only to lose interest in them before they reach their fruition.


Is there any counterpart to this expression in English?



Answer



I'm not aware of a direct idiom for this vice. However, you might consider flash in the pan for once-off efforts, or just short-lived enthusiasm for more general usage.



It's a good expression though—I'll probably use it!


c# - How would I check the range against the entirety of an enemy object, and not just its transform.position?


I want to be able to do range checks against the entirety of an enemy object, and not just its transform.position. The enemy object can have its side or nose within range, but range detection will not determine that it's close enough to be targeted unless its transform.position is within range.


enter image description here


Right now I run a quick check against all nearby units to determine their distance from a turret on the friendly unit.


I was thinking raycasts, but I imagine raycasts every frame for up to 10 detected enemies per unit (Each unit will at max hold onto references of 10 nearby enemies. I could have 100 friendly units within range of 100 enemies) would chew up a fair bit more CPU than just checking their cached transforms.


Suggestions, ideas, or comments?


Edit: More Detail


Currently a Unit has a spherical trigger collider that defines its view range. Any enemy unit within this view range is added to a collection based on priority, with a maximum of 10 objects referenced per collection (each Unit can only be aware of 10 enemy objects). From there, each turret on the ship checks the distance from itself to each of the enemies within that collection. When the distance is <= its attack range, it will fire upon whichever enemy object gets within its attack range first.


I used to use spherical colliders for each turret, which of course would detect the enemy as soon as any part of the enemy entered the trigger collider. The issue was the massive performance loss due to OnTriggerStay() being called by hundreds of units, tens of thousands of times per frame, when I have no need for the method. So I discarded the extra trigger colliders for a more performance-friendly approach.


The issue is that the range check uses the transform.position of the enemy. This does not take into account the collider that makes up the enemy's shape, and will only trigger when the center of the object comes within range.




Answer



Two dot products, using the vector between their centers, tell you which corner is closest.
A positive "Forward" dot indicates a front corner; negative is rear.
A positive "Right" dot indicates a right corner; negative is left.
If the distance between the selected corner and the friendly is less than Range, it is in-range.


Nearest-corner test:


enter image description here


This will detect collisions that aren't caught by the nearest-corner test:
(Diagramming purple was a mistake, since it would have already passed the nearest-corner test. The calculation works for all cases, including perpendicular, as shown in white.
For the perpendicular case, (Width / 1.0) * 1.0 == Width.)



Re-use the dots from the nearest-corner-test to find the nearest "cardinal" enemy direction (the larger of the two). Calculate the hypotenuse length and, if (DistanceBetweenCenters - HYP) <= Radius, the enemy is within range.
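A minimal sketch of the nearest-corner check described above (Python; the rectangle extents, axis vectors, and all names are illustrative assumptions, since the actual classes aren't shown):

```python
import math

def in_range(friend_pos, enemy_pos, enemy_forward, enemy_right,
             half_length, half_width, attack_range):
    """Nearest-corner range test (sketch).

    friend_pos/enemy_pos are (x, y) tuples; enemy_forward/enemy_right are
    unit vectors along the enemy's local axes; half_length/half_width are
    half the enemy's bounding-box extents. This covers only the corner
    case; the side-on (perpendicular) case reuses the same dot products.
    """
    # Vector from the enemy's center to the friendly unit.
    to_friend = (friend_pos[0] - enemy_pos[0], friend_pos[1] - enemy_pos[1])
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]

    # Sign of each dot product selects front/rear and right/left.
    f = dot(to_friend, enemy_forward)
    r = dot(to_friend, enemy_right)
    corner = (enemy_pos[0]
              + math.copysign(half_length, f) * enemy_forward[0]
              + math.copysign(half_width, r) * enemy_right[0],
              enemy_pos[1]
              + math.copysign(half_length, f) * enemy_forward[1]
              + math.copysign(half_width, r) * enemy_right[1])

    dx, dy = friend_pos[0] - corner[0], friend_pos[1] - corner[1]
    return math.hypot(dx, dy) <= attack_range
```

This costs two dot products and one square root per candidate enemy, far cheaper than per-frame raycasts against up to 10 tracked enemies per unit.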


enter image description here


Edited to include theory and attempted to make better use of color:


enter image description here


mathematics - How to make the player slide smoothly against terrain


diagram


I'm making an isometric game. When the player tries to walk diagonally into a wall I want them to slide smoothly across it, so whatever portion of the movement would be legal is used, and anything in the direction of the normal is thrown away. Walls can be any angle, not just vertical or horizontal, and the player has 360 motion.


I feel like I'm almost there but I can't put the last piece into place.


Update: great news everyone! I have it working. But I'm a bit confused about what I should be normalising and what not. The normal just needs to be a unit vector, right? But then I'm mixing that with my input, so I'm normalising that too. Am I wrong?


By the way, I have also found that I need to push the player 1 pixel in the direction of the normal so that they don't get stuck on things. It works well.




Answer



Just project your vector of motion onto the plane normal and then subtract the result from your vector of motion.


Vector undesiredMotion = normal * dotProduct(input, normal); // normal must be unit length
Vector desiredMotion = input - undesiredMotion;

Something like that, anyway. Although in your lovely diagram the input seems to point away from the wall, so I'm slightly confused.
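To answer the normalisation question: only the normal needs to be unit length; the input keeps its own magnitude. A minimal sketch of the projection (Python, illustrative names):

```python
def slide(input_vec, normal):
    """Remove the component of input_vec that points into the wall.

    input_vec and normal are (x, y) tuples. The normal is normalized
    here because the projection formula assumes a unit-length normal;
    the input is left as-is so the player's speed is preserved.
    """
    nx, ny = normal
    length = (nx * nx + ny * ny) ** 0.5
    nx, ny = nx / length, ny / length

    # undesired = normal * dot(input, normal); desired = input - undesired
    d = input_vec[0] * nx + input_vec[1] * ny
    return (input_vec[0] - d * nx, input_vec[1] - d * ny)
```

For example, walking diagonally into a floor whose normal points straight up leaves only the horizontal part of the motion.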


android - Extrapolation breaks collision detection


Before applying extrapolation to my sprite's movement, my collision worked perfectly. However, after applying extrapolation to my sprite's movement (to smooth things out), the collision no longer works.


This is how things worked before extrapolation:


enter image description here


However, after I implement my extrapolation, the collision routine breaks. I am assuming this is because it is acting upon the new coordinate that has been produced by the extrapolation routine (which is situated in my render call).


After I apply my extrapolation


enter image description here


How to correct this behaviour?



I've tried putting an extra collision check just after extrapolation. This does seem to clear up a lot of the problems, but I've ruled it out because putting logic into my rendering is out of the question.


I've also tried making a copy of the sprite's X position, extrapolating that, and drawing using it rather than the original, thus leaving the original intact for the logic to pick up on. This seems a better option, but it still produces some weird effects when colliding with walls. I'm pretty sure this also isn't the correct way to deal with this.


I've found a couple of similar questions on here but the answers haven't helped me.


This is my extrapolation code:


public void onDrawFrame(GL10 gl) {

    // Set/re-set loop back to 0 to start counting again
    loops = 0;

    while (System.currentTimeMillis() > nextGameTick && loops < maxFrameskip) {
        SceneManager.getInstance().getCurrentScene().updateLogic();
        nextGameTick += skipTicks;
        timeCorrection += (1000d / ticksPerSecond) % 1;
        nextGameTick += timeCorrection;
        timeCorrection %= 1;
        loops++;
        tics++;
    }

    extrapolation = (float) (System.currentTimeMillis() + skipTicks - nextGameTick) / (float) skipTicks;

    render(extrapolation);
}

Applying extrapolation


render(float extrapolation) {

    // This example shows extrapolation for the X axis only
    // (the Y position, spriteScreenY, is assumed to be valid).
    extrapolatedPosX = spriteGridX + (SpriteXVelocity * dt) * extrapolation;
    spriteScreenX = extrapolatedPosX * screenWidth;

    drawSprite(spriteScreenX, spriteScreenY);
}
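For reference, the same timing math as a small, language-independent sketch (Python; the constant values in the test are made up):

```python
def extrapolation_factor(now_ms, next_game_tick_ms, skip_ticks_ms):
    """Fraction (0..1) of the current logic tick elapsed at render time,
    mirroring the calculation at the end of onDrawFrame above."""
    return (now_ms + skip_ticks_ms - next_game_tick_ms) / skip_ticks_ms

def extrapolated_x(grid_x, velocity_per_tick, alpha):
    """Project the sprite forward by a fraction of one tick's movement.

    This is why collision appears to break: the drawn position is ahead
    of the position the logic (and collision) last validated.
    """
    return grid_x + velocity_per_tick * alpha
```

A common alternative is to interpolate between the previous and current logic positions instead (rendering one tick behind), so the drawn position is always one the collision code has already validated; that also removes the wobble when the sprite is stationary.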

Edit



As I mentioned above, I have tried making a copy of the sprite's coordinates specifically to draw with... this has its own problems.


Firstly, regardless of the copying, when the sprite is moving it's super-smooth; when it stops, it wobbles slightly left/right, as it's still extrapolating its position based on the time. Is this normal behaviour, and can we 'turn it off' when the sprite stops?


I've tried having flags for left / right and only extrapolating if either of these is enabled. I've also tried copying the last and current positions to see if there is any difference. However, as far as collision goes, these don't help.


If the user is holding, say, the right button and the sprite is moving right, then when it hits a wall and the user keeps the button held down, the sprite keeps animating to the right while being stopped by the wall (so it isn't actually moving). However, because the right flag is still set, and because the collision routine is constantly moving the sprite out of the wall, it still appears to the code (not the player) that the sprite is moving, and therefore extrapolation continues. What the player sees is the sprite 'static' (yes, it's animating, but it's not actually moving across the screen), and every now and then it shakes violently as the extrapolation attempts to do its thing. Hope this helps.




rendering - SDL2 Textures bleeding / 1px border around tile maps - SDL_RenderCopyEx taking integer arguments


https://forums.libsdl.org/viewtopic.php?t=9486 This post gives a good indication of my question.


Basically, if you set SDL2's logicalScale (or similar) and render textures at the native window resolution, they appear fine. However, with tile maps, if you resize the window in any way, you get bleeding where an integer rounding issue creates a 1px border around certain tiles.


Is my only option to create a 1px border around all my images to stop this bleed/rounding error? Or a semi-transparent border with the main color? What are my options? Is this solved in any of the latest SDL 2.x.y releases?


EDIT: A simpler method I have used is reducing my images from 64x64px to 62x62px in SDL2 (not the actual sprite), using its own sprite as a 1px border, and using render scaling to scale up that 1px, which stops the bleed. It reduces the quality of background images ever so slightly, but it requires no tweaking of any code or sprites... but again, I'm wondering if there's a more elegant solution.



Answer



The option I went with is to use an SDL_Texture as a render buffer, created with SDL_TEXTUREACCESS_TARGET (but still using a logical screen size in the renderer). The renderer can copy any number of textures onto the buffer; after all rendering is done, change the render target to the window and copy the buffer to the window.


An example can be found here: https://gist.github.com/Twinklebear/8265888



Wednesday, May 23, 2018

In which situations is it OK to omit articles in short sentences?


Recently, in a lot of new online software, I have often seen developers omit articles in short sentences, especially on action buttons or in tooltips.


For instance: "Add new task" (not "Add a new task") and "Create project" (not "Create a project"), yet often in the same software I can see "Make a copy". In what situations can omitting the article be justified?




Using present tense instead of past tense


In the text below



Suicides liked the bridge. The cop didn't think of that until he saw the man get out of the car, walk slowly along the footpath at the edge, and put a hand on a rail.



Why didn't the writer say:


"Suicides liked the bridge. The cop didn't think of that until he saw the man got out of the car, walked slowly along the footpath at the edge, and put a hand on a rail."



Why has the writer used the present tense instead of the past tense?



Answer



This is the correct form. The verbs get, walk, put aren't present-tense forms but unmarked infinitives (unmarked means they aren't 'marked' with to). That is one of the two verb forms permitted in clauses that act as complements to most verbs of perception, like see, hear, notice.


It is sometimes difficult to distinguish an unmarked infinitive from the plain present form, because there is only one verb in which they are different: be, whose plain present form is are.


But when, as in your example, the verb's subject calls for a 3rd-person singular form, it is easy to tell the difference. If these were present forms, the sentence would read



... he saw the man gets out of the car, walks slowly along the footpath at the edge, and puts a hand on a rail.





The other one is the present participle; you could also write




... he saw the man getting out of the car, walking slowly along the footpath at the edge, and putting a hand on a rail.



But the participle is usually used to say what someone is doing at a particular moment, not a series of things done; so the infinitive is preferred here.


xna 4.0 - 2D Procedural Terrain Generation - Guaranteeing connectedness?


I'm working on a 2D platformer in XNA. One of the things I'd like to be a main design characteristic is procedural content generation. The first step of that is to procedurally generate the terrain. So, I've done loads of research on how to generate Perlin noise, smooth it out, play with parameters and all that jazz. I've also spent loads of time at pcg.wikidot.com. The problem I have is generating a level that is 100% connected, meaning it's possible for the character to get from the left part of the level all the way to the right part.


Right now, I use the Perlin noise to generate a sprite wherever the noise value at that point is < 0.
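That rule can be sketched like so (Python; `noise` is a stand-in for whatever Perlin implementation is in use, since it isn't shown):

```python
def build_tiles(noise, width, height, threshold=0.0):
    """Mark a tile solid wherever the noise value falls below the
    threshold, mirroring the 'sprite when noise < 0' rule above."""
    return [[noise(x, y) < threshold for x in range(width)]
            for y in range(height)]
```

Raising or lowering the threshold changes how much of the level is solid, which directly affects how fragmented the open space becomes.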


Here is a video of the issue I'm having (please excuse the crazy background issues and the fact my character gets stuck when I generate a new level).


http://screencast.com/t/uWJsIGLoih


As you can see in the video, the terrain looks interesting enough, but you can't do a whole lot because the Perlin noise compartmentalizes the level into small, random chambers.


I attempted to perform a random walk and overlay the perlin noise on top of it, so I guarantee a path from left to right. The issue with that is illustrated below: http://screencast.com/t/ilLvxdp3


So my question is: What kinds of things can I do to ensure the player can get from the left part of the level to the right part of the level?




Answer



In using the word connectedness, you've come within a hair's breadth of the tool best suited to determining a solution: graph theory.


Connectedness is a property of graphs. Graphs can be either connected or disconnected (the latter is what you're experiencing: a graph that falls into multiple connected components). Any game level, in any number of dimensions, can be represented as a graph, and logically, this is often the best way to manipulate them. Your game world is a graph in terms of the adjacency between the individual building blocks in your world, and also at the level of connectivity between your various areas. You can use the former to derive the latter.


There is a crucial point to consider when working with (2D) levels as graphs, and that is planarity. Depending on your requirements, planarity may or may not be a concern. Given your use of noise, I expect the latter; however, I outline the options here so that you know what they are.


Planar graph - simplest example is a labyrinth. A labyrinth differs from a maze in that it contains no branchings -- it is unicursal. If you were to take a solid block of shrubbery(!), and generate a labyrinth through it, then at no point could a turning that the labyrinth takes run into an existing path. It's a bit like a game of Snake, really -- if the path is the snake's body, it cannot be allowed to bite/intersect itself. Further, you could have a planar maze; this would branch, but at no point could the branches be allowed to intersect existing parts of the maze already generated, just as with a labyrinth.


Non-planar graph - Simplest example is a city street map. A city is essentially a maze. However, it is a highly-connected maze in that there are many individual road routes to get from one place to another. Moreover, a non-planar graph embedding allows crossings, which is exactly what intersections are. And as we know, a city is not a city without intersections. They are integral to traffic flow. In games, this can be good or bad, depending on your goals. Good level flow allows AI to act more easily, and exploration to be freer; while on the other hand it also allows a player to get from startpoint to goal quickly -- potentially too quickly.


This brings us to your approach, which is to use noise. Depending on how the Perlin noise output is interpreted, it can have some level of connectedness at the macro scale, but it is not designed for 1-connectedness (a single connected graph). This leaves you a few options.




  1. Drop the use of Perlin noise and instead generate a random, planar (non-crossing), connected graph. This provides maximum flow control. However, this approach is non-trivial: planarity is characterized by the absence of the Kuratowski subgraphs K3,3 and K5, and although planarity testing itself can be done in linear time, generating random planar graphs with the properties you want, and producing a planar embedding of them, is still hard to do well. This is without a doubt the hardest approach, but it had to be mentioned first so you know where you stand. All other methods are a shortcut of some sort around this method, which is the fundamental math behind maze generation.





  2. Drop the use of Perlin noise and instead generate a random, non-planar graph embedded within a planar surface (AKA a planar embedding) -- this is how games like Diablo and the roguelikes can be made to work easily, as they both use a grid structure to subdivide a planar space (in fact, the vast majority of levels in the roguelikes DO allow crossings, evident in the number of four-way intersections). Algorithms producing the connectivity between cells or template rooms are often called "carvers" or "tunnellers", because they carve empty space out of a block of solid rock, incrementally.




  3. Do as option (2), but avoid crossings. Thus both the embedding (level geometry) and the topology (level flow) are planar. You will have to be careful not to generate yourself into dead ends if you wish to avoid crossings.




  4. Generate your map using noise. Then, using a flood fill algorithm on every cell in your unconnected level (which is a graph, albeit multipart and grid-based), you can deduce all unconnected, discrete subgraphs within that greater multigraph. Next, consider how you want to connect each individual subgraph. If you prefer to avoid crossings, I suggest a sequential connection of these. If not, you can connect them any way you wish. In order to do this organically, rather than producing hard, straight passages, I would use some sort of coherence function to meld the closest points of each pair of subgraphs (if linking sequentially). This will make the join more "liquid", which is in keeping with the typical Perlin output. The other way you could join areas would be to nudge them closer together, so there is some minimal overlap of the empty spaces.





  5. Generate an excessively large map using noise. Isolate all subgraphs as described in option 4. Determine which is the most interesting, according to certain criteria (could be size, or something else, but size would be easiest). Pick out and use only that subgraph, which is already completely self-connected. The difficulty with this approach is that you may find it hard to control the size of your resultant graphs, unless you brute-force generate a really large map, or many smaller ones, to pick your perfect subgraph. This is because the size of the subgraphs really depends on the Perlin parameters used, and how you interpret the result.
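The flood-fill step that options 4 and 5 rely on might look like this (Python, grid-based; a sketch with illustrative names, not tuned code):

```python
from collections import deque

def label_regions(grid):
    """Label 4-connected open regions of a grid (True = open cell).

    Returns a dict mapping region id -> list of (row, col) cells; each
    region is one of the disconnected subgraphs described above.
    """
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    regions = {}
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c] or seen[r][c]:
                continue
            # Breadth-first flood fill from this unvisited open cell.
            cells, queue = [], deque([(r, c)])
            seen[r][c] = True
            while queue:
                cr, cc = queue.popleft()
                cells.append((cr, cc))
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = cr + dr, cc + dc
                    if (0 <= nr < rows and 0 <= nc < cols
                            and grid[nr][nc] and not seen[nr][nc]):
                        seen[nr][nc] = True
                        queue.append((nr, nc))
            regions[next_id] = cells
            next_id += 1
    return regions
```

Once the regions are labelled, the two options diverge only in what you do with them: carve connections between them, or keep only the biggest one.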




As an aside to the last two options, something I'm sure you have already done, but just in case not: create a minimal Perlin noise test case in Flash. Play around with parameters until you get a higher degree of connectivity between your "island" areas. I don't think this could ever solve your problem 100% across all generations, since Perlin noise has no inherent guarantee of connectedness. But it could improve connectedness.


Whatever you don't understand, ask and I will clarify.


Simple past, Present perfect Past perfect

Can you tell me which form of the following sentences is the correct one please? Imagine two friends discussing the gym... I was in a good s...