Saturday, June 30, 2018

grammar - What is the difference between "were" and "have been"?


What is the difference between "were" and "have been", and are these sentences grammatically correct?


1) Some of the best known writers of detective fiction in the twentieth century were women.
2) Some of the best known writers of detective fiction in the twentieth century have been women.




When to use comma after time adverbs


I am trying to say sentences like:



Then by doing this thing, this happens


Now after selecting this option, this happens


By doing this, now we can do this



Should there be a comma after "then" and "now", as in:




Then, by doing this thing, this happens


Now, after selecting this option, this happens


By doing this, now, we can do this





responses - Answering question with yes or no


If someone asks me "Hasn't it been decided yet?" and it hasn't been decided, should I answer with:



Yes (it hasn't been decided yet.)



or



No (it hasn't been decided yet.)





software engineering - How can I implement gravity?


How can I implement gravity? Not for a particular language, just pseudocode...



Answer



As others have noted in the comments, the basic Euler integration method described in tenpn's answer suffers from a few problems:





  • Even for simple motion, like ballistic jumping under constant gravity, it introduces a systematic error.




  • The error depends on the timestep, meaning that changing the timestep changes object trajectories in a systematic way that may be noticed by players if the game uses a variable timestep. Even for games with a fixed physics timestep, changing the timestep during development can noticeably affect the game physics such as the distance that an object launched with a given force will fly, potentially breaking previously designed levels.




  • It doesn't conserve energy, even if the underlying physics should. In particular, objects that should oscillate steadily (e.g. pendulums, springs, orbiting planets, etc.) may steadily accumulate energy until the whole system blows apart.





Fortunately, it's not hard to replace Euler integration with something that is almost as simple, yet has none of these problems — specifically, a second-order symplectic integrator such as leapfrog integration or the closely related velocity Verlet method. In particular, where basic Euler integration updates the velocity and position as:



acceleration = force(time, position) / mass;
time += timestep;
position += timestep * velocity;
velocity += timestep * acceleration;

the velocity Verlet method does it like this:




acceleration = force(time, position) / mass;
time += timestep;
position += timestep * (velocity + timestep * acceleration / 2);
newAcceleration = force(time, position) / mass;
velocity += timestep * (acceleration + newAcceleration) / 2;

If you have multiple interacting objects, you should update all their positions before recalculating the forces and updating the velocities. The new acceleration(s) can then be saved and used to update the position(s) on the next timestep, reducing the number of calls to force() down to one (per object) per timestep, just like with the Euler method.
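For illustration, here is a minimal C# sketch of that scheme (the Body struct, the Force() function and the constant-gravity placeholder are illustrative, not part of the original pseudocode). The acceleration computed at the end of one step is cached on the body and reused at the start of the next, so Force() runs once per body per timestep:

using System.Numerics;

struct Body
{
    public float Mass;
    public Vector2 Position, Velocity, Acceleration; // Acceleration cached from the previous step
}

static class VerletStep
{
    // Placeholder force: constant gravity only; replace with your game's force model.
    static Vector2 Force(Body[] bodies, int i) => new Vector2(0, -9.81f) * bodies[i].Mass;

    public static void Step(Body[] bodies, float dt)
    {
        // 1. Advance every position using the cached acceleration.
        for (int i = 0; i < bodies.Length; i++)
            bodies[i].Position += dt * (bodies[i].Velocity + 0.5f * dt * bodies[i].Acceleration);

        // 2. Recompute forces at the new positions, then finish the velocity updates.
        for (int i = 0; i < bodies.Length; i++)
        {
            Vector2 newAcceleration = Force(bodies, i) / bodies[i].Mass;
            bodies[i].Velocity += 0.5f * dt * (bodies[i].Acceleration + newAcceleration);
            bodies[i].Acceleration = newAcceleration; // cache for the next timestep
        }
    }
}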


Also, if the acceleration is normally constant (like gravity during ballistic jumping), we can simplify the above to just:



time += timestep;

position += timestep * (velocity + timestep * acceleration / 2);
velocity += timestep * acceleration;

where the extra "timestep * acceleration / 2" term in the position update is the only change compared to basic Euler integration.


Compared to Euler integration, the velocity Verlet and leapfrog methods have several nice properties:




  • For constant acceleration, they give exact results (up to floating point roundoff errors, anyway), meaning that ballistic jump trajectories stay the same even if the timestep is changed.





  • They are second order integrators, meaning that, even with varying acceleration, the average integration error is only proportional to the square of the timestep. This can allow for larger timesteps without compromising accuracy.




  • They are symplectic, meaning that they conserve energy if the underlying physics do (at least as long as the timestep is constant). In particular, this means that you won't get things like planets spontaneously flying out of their orbits, or objects attached to each other with springs gradually wobbling more and more until the whole thing blows up.




Yet the velocity Verlet / leapfrog methods are nearly as simple and fast as basic Euler integration, and certainly much simpler than alternatives like fourth-order Runge-Kutta integration (which, while generally a very nice integrator, lacks the symplectic property and requires four evaluations of the force() function per time step). Thus, I would strongly recommend them for anyone writing any sort of game physics code, even if it's as simple as jumping from one platform to another.
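As a quick sanity check of the "exact for constant acceleration" property (a standalone C# sketch, not part of the original answer): integrate a 1D ballistic jump with two different timesteps and compare against the closed-form solution x(t) = v0*t + g*t^2/2; the error stays at floating-point roundoff level for both timesteps.

using System;

class ConstantGravityCheck
{
    static void Main()
    {
        const double v0 = 10.0, g = -20.0;
        foreach (double dt in new[] { 0.1, 0.01 })
        {
            double x = 0, v = v0, t = 0, maxError = 0;
            for (int step = 0; step < (int)Math.Round(1.0 / dt); step++)
            {
                x += dt * (v + dt * g / 2); // velocity Verlet position update
                v += dt * g;                // acceleration is constant, so no averaging needed
                t += dt;
                maxError = Math.Max(maxError, Math.Abs(x - (v0 * t + 0.5 * g * t * t)));
            }
            Console.WriteLine($"dt = {dt}: max error = {maxError:E1}"); // roundoff-sized for both timesteps
        }
    }
}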




Edit: While the formal derivation of the velocity Verlet method is only valid when the forces are independent of the velocity, in practice you can use it just fine even with velocity-dependent forces such as fluid drag. For best results, you should use the initial acceleration value to estimate the new velocity for the second call to force(), like this:




acceleration = force(time, position, velocity) / mass;
time += timestep;
position += timestep * (velocity + timestep * acceleration / 2);
velocity += timestep * acceleration;
newAcceleration = force(time, position, velocity) / mass;
velocity += timestep * (newAcceleration - acceleration) / 2;

I'm not sure if this particular variant of the velocity Verlet method has a specific name, but I've tested it and it seems to work very well. It's not quite as accurate as fourth-order Runge-Kutta (as one would expect from a second-order method), but it's much better than Euler or naïve velocity Verlet without the intermediate velocity estimate, and it still retains the symplectic property of normal velocity Verlet for conservative, non-velocity-dependent forces.


Edit 2: A very similar algorithm is described e.g. by Groot & Warren (J. Chem. Phys. 1997), although, reading between the lines, it seems that they sacrificed some accuracy for extra speed by saving the newAcceleration value computed using the estimated velocity and reusing it as the acceleration for the next timestep. They also introduce a parameter 0 ≤ λ ≤ 1 which is multiplied with acceleration in the initial velocity estimate; for some reason, they recommend λ = 0.5, even though all my tests suggest that λ = 1 (which is effectively what I use above) works as well or better, with or without the acceleration reuse. Maybe it's got something to do with the fact that their forces include a stochastic Brownian motion component.


grammar - Why no verb in passive voice phrase


If "New account created" is passive. Why there not "account was/is created"? I have seen a lot of similar examples.




american english - Is "thru" for "through" acceptable? "Tho" for "though"?


I've been told that in American English, sometimes words ending in -ough are written -u: for example thru instead of through.


Is this correct English, or is it simply a common error?

If it is correct, what are the rules for this spelling?



Answer



"Thru" is correct (however very informal, not a very good idea, and only used when space is at a real premium — e.g. road signs, technical drawings) English, but -u is not a shortened way of -ough except in words that derivate from through (e.g. breakthrough).


From memory, I can recall although, enough etc., where -ough can't be replaced by -u (althu, enu etc.), since in those words -ough doesn't have a /u/ sound. (Although can, however, be shortened to altho, as noted in a comment; Wiktionary and other dictionaries do record it, noting that it's quite informal.)


word usage - "When do I?" vs. "At what time should I?"



When do I have to come to the office tomorrow?



or




At what time should I come to the office tomorrow?



Are both of them correct? If so, do both of them mean the same? Which one can be used in a formal conversation?




Friday, June 29, 2018

singular vs plural - "6-foot tall" or "6-feet tall"?


I have heard/seen people say/write "She is 5 feet 10 inches tall" and "She is 5-foot-10." But in formal writing, is there a convention? I found both "8-foot-tall" and "nine-feet tall" in online sources.



The bronze, 8-foot-tall LBJ sculpture is slated to be installed at downtown's Little Tranquility Park, bound by Capitol, Walker, Bagby and Smith streets. (source)


Nine-feet tall and bronze, the monument to the famous novelist has been erected in the grassy center of the 13.7-acre park's circular drive. (source)




Answer



When a measurement is used right before the noun it measures, use a hyphen and the singular form of the unit of measurement:




I saw a 95-foot yacht in the harbor.
The 12-mile climb is too arduous for casual visitors.
The monument is in the 13.7-acre park's circular drive.



A dimension can also be included with another hyphen:



I saw a 95-foot-long yacht in the harbor.
The 8-foot-tall sculpture is impressive.
The flagpole is a 25-foot-tall, 3-inch-thick bamboo pipe.




However, when the measurement is used as a predicate, separate from the noun it measures, use the plural form of the unit of measurement. Don't use a hyphen:



The yacht I saw was 95 feet long.
The flagpole is 25 feet tall and 3 inches thick.
I can only finish a climb that's 4 miles or less.
Nine feet tall and bronze, the monument is popular among tourists.
She is 5 feet 10 inches tall.



Thursday, June 28, 2018

How were 8-bit and 16-bit games developed?



We have a lot of information on the internet out there on plenty of engines, SDKs, fancy IDEs, etc. But how did people manage to develop games in the past? Are there 'famous' tools? What was the most used programming language? How were they deployed onto cartridges?




architecture - Should I use an SQL database to store data in a desktop game?




Developing a Game Engine


I am planning a computer game and its engine. There will be a three-dimensional world with a first-person view, and it will be single-player for now. The programming language is C++ and it uses OpenGL.


Data Centered Design Decision


My design decision is to use a data-centered architecture where there is a global event manager and a global data manager. There are many components like physics, input, sound, renderer, AI, and so on. Each component can trigger and listen to events. Moreover, each component can read, edit, create and remove data.


The question is about the data manager.


Whether to Use a Relational Database


Should I use an SQL database, e.g. SQLite or MySQL, to store the game data? This would contain virtually all game content like items, characters, inventories, and so on, except for meshes and textures, which are even more performance-critical, so I will keep those in memory.


Is an SQL database fast enough to use for real-time reading and writing of game information, like the position of a moving character? I also need to care about cross-platform compatibility. Aside from keeping everything in memory, what alternatives do I have?


Advantages Would Be


The advantages of using a relational database like MySQL would be the data-oriented structure, which allows fast computation. I would not need objects for representing entities. I could easily query data of objects near the player needed for rendering, and I wouldn't have to worry about data of objects far away. Moreover, there would be no need for savegames, since the whole game state is saved in the database. Last but not least, expanding the game to an online game would be relatively easy because there already is a place where the whole game state is stored.




Answer



An SQL database is not nearly fast enough to use for realtime reading and writing game information. Such data is almost always kept in memory, in traditional data structures.


There may be some benefit to using an embedded database such as SQLite for certain types of data, e.g. static data that doesn't change during gameplay but does change during development. This could then be deployed as part of the final game, where SQLite is only really used when loading up the game for the first time, or when starting a new level, etc.


However, there are many downsides too: it is hard to patch individual parts of the data when they're stored in a single database file; it is not ideal for many types of complex data that games need (and which you said you'd store outside, but which will have references to and from things inside); it is not very flexible when you need to change the schema; it is not necessarily backwards compatible after you change the schema; and so on.


For these reasons, most game developers will just use their own format. Professional developers who are performance conscious sometimes go one step further and save the in-memory data structure directly to disk so that it can be loaded in with a minimum of processing.


And if you really need text-based tabular data that is easily edited, you could use a simple text based format, such as CSV, XML, JSON, YAML, etc.
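For instance, a minimal sketch of loading static item definitions from a JSON file once at startup (the type, file name and fields here are made up for illustration; it uses .NET's System.Text.Json, and the live game state itself stays in ordinary in-memory structures, as recommended above):

using System.Collections.Generic;
using System.IO;
using System.Text.Json;

// Hypothetical static data record; the shape depends on your game.
public class ItemDefinition
{
    public string Id { get; set; }
    public string Name { get; set; }
    public int MaxStack { get; set; }
}

public static class StaticData
{
    public static Dictionary<string, ItemDefinition> Items { get; private set; }

    // Called once at startup; nothing here is touched during real-time gameplay.
    public static void Load(string path = "Content/items.json")
    {
        var list = JsonSerializer.Deserialize<List<ItemDefinition>>(File.ReadAllText(path));
        Items = new Dictionary<string, ItemDefinition>();
        foreach (var item in list)
            Items[item.Id] = item;
    }
}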


c# - Platform jumping problems with AABB collisions


See the diagram first:



When my AABB physics engine resolves an intersection, it does so by finding the axis where the penetration is smaller, then "push out" the entity on that axis.


Considering the "jumping moving left" example:




  • If velocityX is bigger than velocityY, AABB pushes the entity out on the Y axis, effectively stopping the jump (result: the player stops in mid-air).

  • If velocityX is smaller than velocityY (not shown in diagram), the program works as intended, because AABB pushes the entity out on the X axis.


How can I solve this problem?


Source code:


public void Update()
{
    Position += Velocity;
    Velocity += World.Gravity;

    List<SSSPBody> toCheck = World.SpatialHash.GetNearbyItems(this);

    for (int i = 0; i < toCheck.Count; i++)
    {
        SSSPBody body = toCheck[i];
        body.Test.Color = Color.White;

        if (body != this && body.Static)
        {
            float left = (body.CornerMin.X - CornerMax.X);
            float right = (body.CornerMax.X - CornerMin.X);
            float top = (body.CornerMin.Y - CornerMax.Y);
            float bottom = (body.CornerMax.Y - CornerMin.Y);

            if (SSSPUtils.AABBIsOverlapping(this, body))
            {
                body.Test.Color = Color.Yellow;

                Vector2 overlapVector = SSSPUtils.AABBGetOverlapVector(left, right, top, bottom);
                Position += overlapVector;
            }

            if (SSSPUtils.AABBIsCollidingTop(this, body))
            {
                if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
                    (Position.Y + Height/2f == body.Position.Y - body.Height/2f))
                {
                    body.Test.Color = Color.Red;
                    Velocity = new Vector2(Velocity.X, 0);
                }
            }
        }
    }
}



public static bool AABBIsOverlapping(SSSPBody mBody1, SSSPBody mBody2)
{
    if (mBody1.CornerMax.X <= mBody2.CornerMin.X || mBody1.CornerMin.X >= mBody2.CornerMax.X)
        return false;
    if (mBody1.CornerMax.Y <= mBody2.CornerMin.Y || mBody1.CornerMin.Y >= mBody2.CornerMax.Y)
        return false;

    return true;
}

public static bool AABBIsColliding(SSSPBody mBody1, SSSPBody mBody2)
{
    if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
        return false;
    if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
        return false;

    return true;
}

public static bool AABBIsCollidingTop(SSSPBody mBody1, SSSPBody mBody2)
{
    if (mBody1.CornerMax.X < mBody2.CornerMin.X || mBody1.CornerMin.X > mBody2.CornerMax.X)
        return false;
    if (mBody1.CornerMax.Y < mBody2.CornerMin.Y || mBody1.CornerMin.Y > mBody2.CornerMax.Y)
        return false;

    if (mBody1.CornerMax.Y == mBody2.CornerMin.Y)
        return true;

    return false;
}

public static Vector2 AABBGetOverlapVector(float mLeft, float mRight, float mTop, float mBottom)
{
    Vector2 result = new Vector2(0, 0);

    if ((mLeft > 0 || mRight < 0) || (mTop > 0 || mBottom < 0))
        return result;

    if (Math.Abs(mLeft) < mRight)
        result.X = mLeft;
    else
        result.X = mRight;

    if (Math.Abs(mTop) < mBottom)
        result.Y = mTop;
    else
        result.Y = mBottom;

    if (Math.Abs(result.X) < Math.Abs(result.Y))
        result.Y = 0;
    else
        result.X = 0;

    return result;
}

Answer




I just looked at the code; I have not tried to prove exactly where it is wrong.


I looked at the code and these 2 lines seemed strange:


if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
(Position.Y + Height/2f == body.Position.Y - body.Height/2f))

You check for an interval and then you check for equality? I may be wrong (there might be some rounding going on), but it seems it might cause trouble.
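If the exact float comparison does turn out to be the culprit, one common fix (just a sketch against the code in the question, with an arbitrary epsilon value) is to compare with a small tolerance instead of strict equality:

// Hypothetical helper: treat two floats as equal when they differ by less than epsilon.
static bool NearlyEqual(float a, float b, float epsilon = 0.001f)
{
    return Math.Abs(a - b) <= epsilon;
}

// ...and in the check from the question:
if ((Position.X >= body.CornerMin.X && Position.X <= body.CornerMax.X) &&
    NearlyEqual(Position.Y + Height / 2f, body.Position.Y - body.Height / 2f))
{
    body.Test.Color = Color.Red;
    Velocity = new Vector2(Velocity.X, 0);
}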



Wednesday, June 27, 2018

clauses - Can "for -ing" form be used after a noun to indicate the purpose of the noun?


Is the following sentence correct?




  • The most effective measure for stimulating the economy is reducing interest rates.



In this context, the "for -ing" clause means that




The purpose of the measure is to stimulate the economy





java - How can I mark levels as "complete" in a way that prevents cheating?


My first thoughts about marking levels complete is to just write the information to a file upon level completion and load that in once the app is opened up again. But how could I keep this safe from tampering and prevent cheating?
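One common mitigation, sketched below in C# (the question is tagged Java; javax.crypto.Mac with "HmacSHA256" provides the same primitive there, and the class and key here are made up for illustration): store the progress data next to a keyed hash and reject the file on load if the hash doesn't match. Since the key ships inside the app, this only deters casual file editing, not a determined attacker.

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static class ProgressFile
{
    // Embedded key: only raises the bar against casual editing of the save file.
    static readonly byte[] Key = Encoding.UTF8.GetBytes("replace-with-your-own-secret");

    public static void Save(string path, string progressData)
    {
        string tag = Convert.ToBase64String(ComputeTag(progressData));
        File.WriteAllText(path, tag + "\n" + progressData);
    }

    public static string Load(string path)
    {
        var parts = File.ReadAllText(path).Split(new[] { '\n' }, 2);
        if (parts.Length != 2) return null;
        string expected = Convert.ToBase64String(ComputeTag(parts[1]));
        return expected == parts[0] ? parts[1] : null;  // null = tampered or corrupt
    }

    static byte[] ComputeTag(string data)
    {
        using (var hmac = new HMACSHA256(Key))
            return hmac.ComputeHash(Encoding.UTF8.GetBytes(data));
    }
}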




c# - How can I avoid tight script coupling in Unity?


Some time ago I started working with Unity and I still struggle with the issue of tightly coupled scripts. How can I structure my code to avoid this problem?


For example:



I want to have health and death systems in separate scripts. I also want to have different interchangeable walking scripts that allow me to change the way the player character is moved (physics-based, inertial controls like in Mario versus tight, twitchy controls like in Super Meat Boy). The Health script needs to hold a reference to the Death script, so that it can trigger the Die() method when the player's health reaches 0. The Death script should hold some reference to the walking script used, to disable walking on death (I'm tired of zombies).


I would normally create interfaces, like IWalking, IHealth and IDeath, so that I can change these elements at a whim without breaking the rest of my code. I would have them set up by a separate script on the player object, say PlayerScriptDependancyInjector. Maybe that script would have public IWalking, IHealth and IDeath attributes, so that the dependencies can be set by the level designer from the inspector by dragging and dropping appropriate scripts.


That would allow me to simply add behaviors to game objects easily and not worry about hard-coded dependencies.


The problem in Unity


The problem is that in Unity I can't expose interfaces in the inspector, and if I write my own inspectors, the references won't get serialized, and it's a lot of unnecessary work. That's why I'm left with writing tightly coupled code. My Death script exposes a reference to an InertiveWalking script. But if I decide I want the player character to control tightly, I can't just drag and drop the TightWalking script; I need to change the Death script. That sucks. I can deal with it, but my soul cries every time I do something like this.


What's the preferred alternative to interfaces in Unity? How do I fix this problem? I found this, but it tells me what I already know, and it does not tell me how to do that in Unity! This too discusses what should be done, not how; it does not address the issue of tight coupling between scripts.


All in all, I feel those are written for people who came to Unity from a game design background and are just learning how to code, and there are very few resources on Unity for regular developers. Is there any standard way to structure your code in Unity, or do I have to figure out my own method?
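For what it's worth, one common workaround (a rough sketch assuming the scripts live on the same GameObject; the class names and interface members here are illustrative) is to keep the interfaces and resolve them at runtime, since GetComponent does accept interface types even though the inspector won't show them:

using UnityEngine;

public interface IWalking { void SetEnabled(bool enabled); }
public interface IDeath   { void Die(); }

// Any walking implementation is just a MonoBehaviour that implements the interface.
public class InertialWalking : MonoBehaviour, IWalking
{
    public void SetEnabled(bool enabled) { this.enabled = enabled; }
    // ... physics-based movement in FixedUpdate ...
}

public class Death : MonoBehaviour, IDeath
{
    IWalking walking;

    void Awake()
    {
        // Resolves whichever IWalking implementation is attached, so swapping
        // InertialWalking for TightWalking needs no change to this script.
        walking = GetComponent<IWalking>();
    }

    public void Die()
    {
        if (walking != null)
            walking.SetEnabled(false);
        // ... rest of death handling ...
    }
}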




Is there a specific word/phrase for a student living in the city of his uni/college during the week?


First of all, I am not sure if people actually do this in the UK, but in my country students who study at university often rent a room in the city and live there during the week. They go home on Fridays and return on Sunday evenings. We have a specific expression in Dutch for this, but there is no direct translation into English. You could say that the student goes to live on his own or moves out of their parents' house, but that does not really mean the same thing.


The reason for my asking is that I've had to formulate this several times already in writing tasks or just answers to English acquaintances. I have, however, never really found a way to say it in a short and concise way without having to explain what I meant.


So what do you say in English in a case like this? How would you say for example that your cousin is going to university next year and he will be living in the city during the week? Would you say that he is moving out, even though he is still under his parents' wings financially?



Answer



In the United States, one term is suitcase student, and an institution which has many such students may be known as a suitcase school. For example, consider this 2013 New York Times article about Central Connecticut State University. Connecticut is one of the smallest states, and CCSU is an institution sponsored by the state government, so the great majority of its students are from areas less than two hours away (which is extremely close by US standards), and so this pattern is prevalent.



Almost half of Central’s 7,700 full-time undergraduates live in dorms or near campus. But most vanish each Friday, joining the army of undergraduates at “suitcase schools” around the country who desert their campuses on weekends.


They head home for the same reasons suitcase students always have: favorite meals, moms (and now dads) still willing to do their laundry, high school friends and sweethearts, and jobs. The refrain “There’s nothing to do on campus” is self-fulfilling. …




Moore, Abigail Sullivan. "Off Off Off Campus" in The New York Times, Jan. 31, 2013


The article is full of other suitcase terms, including suitcase culture, suitcase mentality, suitcase legacy. It refers to a student for whom life at the university residence halls is temporary, not a true home; therefore, they pack a suitcase of clothes, as if going on a vacation somewhere.


I wouldn't consider this term to be commonly used, as most colleges and universities are either traditional residential institutions or commuter schools, where only a very few students or none at all live independently on or near the university campus. Surprisingly, I found no results for it in COCA. In Google Books, however, they go back to at least the mid-20th century, e.g.



One problem that is bothersome today is the "suitcase student," who leaves campus on Friday afternoon and returns on Monday morning. Treudley, Mary Bosworth. Prelude to the Future: The First Hundred Years of Hiram College. Association Press, 1950


Ole Miss remained a pleasant headquarters but scarcely a community of scholars. It was, as the expression went, "a suitcase school." Lord, Walter. The past that would not die. Harper & Row, 1965



3d - Resources of realistic water simulation?


I want to study water simulation, with a demo (including source code) that uses physically-based methods (Eulerian or Lagrangian approaches).


How can I get some examples?




Tuesday, June 26, 2018

Does a singular or plural verb go with plural nouns like trousers, glasses, scissors, binoculars and many more?




  1. My glasses (was/were) lying on the table.

  2. My trousers (is/are) torn.


  3. (This/These) binoculars (was/were) gifted to me.



I know they're in plural form, but plural nouns such as rickets and measles (disease names) take a singular verb.




meaning - What does "graphic" mean in "a graphic video"? Is it an official term?


Googling "graphic video" revelas that it is something like "video with violence", yet I can't find "graphic video" in a dictionary.


Description of "graphic" sometimes give a hint:




depicted in a realistic or vivid manner:
Example: graphic sex and violence.



but the definition itself doesn't imply any violence; only the example does.


Is it appropriate to use the term "graphic video" just for a realistic or picturesque video? Isn't "graphic video" a tautology (video implies relating to graphics, like audio to sound)? Is it some euphemism?




How to decide the countability of 'performance'



Performance



  1. [uncountable, countable] how well or badly you do something; how well or badly something works


the country’s economic performance


He criticized the recent poor performance of the company.


Profits continue to grow, with strong performances in South America and the Far East.



Her academic performance has been inconsistent.



Some nouns can be classified as both uncountable and countable. The above is an example taken from the Oxford dictionary.


Without a plural marker or an indefinite article, sometimes it's a bit difficult for me to decide which category a noun like performance belongs with. I'm not sure if the categories could bring nuances to the intended meaning.


Would a native speaker automatically classify them into different noun categories, or never pay attention to such dichotomy?




networking - Network layer libraries



I'm looking for any network layers that are available to add to my game, either free or with fair pricing for indie games.


By network layers I mean some sort of library which I can interface with, that I will be able to send messages to and receive messages from, and it will handle all the low-level information by itself.


I'm especially looking for:



  • High quality libraries that understand and deal with complex things such as network congestion.

  • Scalable libraries, that will allow me to have a lot of players playing together.


  • Preferably a peer-to-peer solution, and not a server based one.

  • Preferably a library that has binding for high-level languages (such as Java or C#).


An example of what I'm looking for is Grapple, but I know there are other libraries available.




opengl - Generating geometry when using VBO


Currently I am working on a project in which I generate geometry based on the player's movement: a glorified, very long trail composed of quads.


I am doing this by storing the vertices in a std::vector, removing the oldest vertices once enough exist, and then calling glDrawArrays.


I am interested in switching to a shader-based model. In the examples I usually see, the VBO is generated at startup and that's basically it. What is the best route to go about creating geometry in real time using the shader/VBO approach?



Answer



Even though this is application-dependent, and depends on how much geometry you generate and how dynamic your application is, there are general rules you can follow when using VBOs.


Specify how your VBOs will be used:



  • "Static" means the data in VBO will not be changed (specified once and used many times),



  • "Dynamic" means the data will be changed frequently (specified and used repeatedly)




  • "Stream" means the data will be changed every frame (specified once and used once). "Draw"




So in your case they should be dynamic.



  • Update (reuse) your existing VBOs using glBufferSubData(), glMapBuffer() and glUnmapBuffer(), instead of creating and allocating new VBOs, as sketched below.
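For instance, a minimal sketch of allocating a dynamic VBO once and overwriting it each frame (shown here with C#/OpenTK-style bindings; the underlying GL calls are the same in C or C++, and the buffer size and vertex data are placeholders for the trail described in the question):

using System;
using OpenTK.Graphics.OpenGL;

class TrailBuffer
{
    readonly int vbo;

    public TrailBuffer(int maxVertexCount)
    {
        vbo = GL.GenBuffer();
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        // Allocate once with a Dynamic usage hint, sized for the largest trail expected.
        GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(maxVertexCount * 3 * sizeof(float)),
                      IntPtr.Zero, BufferUsageHint.DynamicDraw);
    }

    // Each frame: overwrite only the changed vertices instead of recreating the VBO.
    public void Update(float[] vertexData)
    {
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
        GL.BufferSubData(BufferTarget.ArrayBuffer, IntPtr.Zero,
                         (IntPtr)(vertexData.Length * sizeof(float)), vertexData);
    }
}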



About using std::vector: this can hurt performance if you add and remove a lot of objects; try to reserve a vector with a predefined capacity to minimize memory copies.


As for shaders, I don't think this makes any difference as long as you track transformation matrices (player position, rotation).


For more on VBOs, refer to http://www.songho.ca/opengl/gl_vbo.html


xna - Rotate entity to match current velocity


I have an entity in 2D space moving around, lerping between waypoints. I would like to make the entity rotate around its own origin to face the current direction it is going, i.e. towards the next waypoint in the list.


In case the waypoint movement code is needed, I'm using the following (this is contained inside the entity that is following the waypoints):


private void GoToWaypoint() {
    if (waypointList.Count > 0) {
        Vector2 origin = helper.GetOrigin(position, width, height);
        int nodeWidth = 50;
        int nodeHeight = 50;

        if (moveToPosition != waypointList[0]) {
            moveToPosition = waypointList[0];
            moveToPositionOrigin = helper.GetOrigin(moveToPosition, nodeWidth, nodeHeight);
            distanceToPosition = moveToPositionOrigin - origin;
        }

        if ((Math.Ceiling(distanceToPosition.X) < 0)) {
            position.X -= moveSpeed;
            distanceToPosition.X = (int)Math.Round(distanceToPosition.X + moveSpeed);
        }
        else if ((Math.Floor(distanceToPosition.X) > 0)) {
            position.X += moveSpeed;
            distanceToPosition.X = (int)Math.Round(distanceToPosition.X - moveSpeed);
        }

        if (Math.Ceiling(distanceToPosition.Y) < 0) {
            // Check if the entity moves below the top boundary
            if (position.Y > 0) {
                position.Y -= moveSpeed;
                distanceToPosition.Y = (int)Math.Round(distanceToPosition.Y + moveSpeed);
            }
            else {
                distanceToPosition.Y = 0;
            }
        }
        else if (Math.Floor(distanceToPosition.Y) > 0) {
            // Check if the entity moves below the bottom boundary
            if (position.Y < 550) {
                position.Y += moveSpeed;
                distanceToPosition.Y = (int)Math.Round(distanceToPosition.Y - moveSpeed);
            }
            else {
                distanceToPosition.Y = 0;
            }
        }

        // If the entity gets in the waypoint, or the distance to the waypoint is 0 on both axis (which can occur due to screen boundaries)
        if ((helper.InNode(position, moveToPosition, width, height, nodeWidth, nodeHeight)) || ((distanceToPosition.X == 0) && (distanceToPosition.Y == 0))) {
            waypointList.RemoveAt(0);
        }
    }
}

I know this may have been asked elsewhere, and the bit of searching that I did on this type of problem didn't really help me. I'm not great at math, so I don't understand explanations that don't spell out what the math is actually achieving. So any help that breaks down the math and functions needed to achieve this would be greatly appreciated.


Thanks.



Answer



Step 1: Ensure that your sprites have a consistent default orientation. That is, all your sprites need to point in the same direction. This will make everything else vastly easier.


Step 2: Given two waypoints, you can compute the vector direction between them. Normalize this vector. This is the direction that you want your ship to face.


Step 3: The C-standard library function atan2 computes an angle (in radians) from a normalized vector. The vector (1, 0) will produce an angle of 0.



Therefore, you can compute the angle to rotate the sprite by calling atan2(dir.y, dir.x) (note that Y comes first. That's not a typo).


Step 4: In Step 1, we ensured that all sprites have a default orientation. atan2 has a default orientation of (1, 0). Therefore, we need to adjust the angle we got, so that providing a vector in the direction of the default produces a 0 angle. This adjustment is done by applying an offset to the value from atan2, depending on what the default orientation is (note: the following assumes that +Y is up and +X is right). The offset for each default orientation is:



  • right: 0

  • up: -pi/2

  • left: -pi

  • down: -3pi/2


Step 5: When you draw this sprite, use the appropriate XNA function to rotate the sprite at draw time. I'm sure XNA has a way to do that. You may have to convert the angle to degrees though.


Note that this computation also assumes that the XNA rotation function rotates counter-clockwise, such that positive angle values rotate it counter-clockwise.





Advice: You should not have a "moveSpeed" that is added to both X and Y. You should have a moveSpeed which is multiplied by the normalized direction to the destination waypoint. That vector (which is the velocity vector) is then added to the current position to get the new position.
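As a rough illustration of steps 2-5 and the advice above (an XNA-flavoured sketch; texture, spriteBatch and the field names taken from the question's code are assumptions, and the sprite is assumed to point right by default):

Vector2 toWaypoint = moveToPositionOrigin - origin;
if (toWaypoint != Vector2.Zero)
{
    // Velocity = speed times a normalized direction (the advice above).
    Vector2 direction = Vector2.Normalize(toWaypoint);
    position += direction * moveSpeed;

    // In XNA's screen coordinates +Y points down, so Atan2(direction.Y, direction.X)
    // gives the SpriteBatch rotation directly for a right-facing sprite.
    float rotation = (float)Math.Atan2(direction.Y, direction.X);

    // At draw time, rotate around the sprite's centre:
    spriteBatch.Draw(texture, position, null, Color.White, rotation,
                     new Vector2(texture.Width / 2f, texture.Height / 2f),
                     1f, SpriteEffects.None, 0f);
}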


sentence construction - I off day this afternoon



I off day this afternoon.



Is "off day" use correctly for stating I working on morning but off on afternoon?



Answer



The sentence is utterly ungrammatical.



There are many ways to say that.



I'm not in the office after noon.
I'm here till the afternoon, then I'm on leave.
I'm on a half-day leave, post lunch (this is specific, but works in most parts of India).



xna 4.0 - Getting a black scene with XNA, what am I doing wrong?


I'm new to XNA (ver 4) and I'm obviously doing something wrong but I don't know what.


So far I have managed to model my scene correctly (it's quite simple, just a bunch of squares which are calculated; there are no models involved), except for a big problem with the light. I'm using a BasicEffect to render things; as I lower the transparency level, the scene also gets darker, until it goes totally black when the transparency is zero. There's nothing black in the scene other than the background fill color (which doesn't show anywhere, anyway).


Setting various depths of the zbuffer when clearing the screen has no effect on the result.


Each vertex has a color value that is being calculated and appears to be displayed properly; I haven't tried to do anything with textures yet. All the polygons should be completely opaque.


Edit again: Fusspawn solved half my problem--my understanding of alpha was backwards. That still doesn't explain why I can see through things even when alpha is at 1.0, though.



Edit: I didn't add the code before because I think this is some misunderstanding of how things work rather than a simple bug. This code generates all the triangles where they are supposed to be; the only problem is lighting.


(Note: I've replaced the recursive hunt with a breadth-first one to get the cutoff distance right. No change to the vision problem.)


private void PushBox(int x, int y, int z, int Distance)
{
    if (World.MapData[x, y, z].Contents.Impassible()) return;
    if (World.MapData[x, y, z].Scanned) return;
    if (Distance >= MaxDistance) return;
    World.MapData[x, y, z].Scanned = true;

    // Left
    if (World.MapData[x + 1, y, z].Contents.Impassible())
        Walls.Square(Vertex(x + 1, y, z, WallColor), Vertex(x + 1, y + 1, z, WallColor), Vertex(x + 1, y + 1, z + 1, WallColor), Vertex(x + 1, y, z + 1, WallColor));
    else
        PushBox(x + 1, y, z, Distance + 1);

    // Right
    if (World.MapData[x - 1, y, z].Contents.Impassible())
        Walls.Square(Vertex(x, y, z, WallColor), Vertex(x, y, z + 1, WallColor), Vertex(x, y + 1, z + 1, WallColor), Vertex(x, y + 1, z, WallColor));
    else
        PushBox(x - 1, y, z, Distance + 1);

    // Behind
    if (World.MapData[x, y - 1, z].Contents.Impassible())
        //Distance = Distance;
        // This square was generated wrong (no such square was visible so I didn't catch it);
        // it's now fixed. It isn't the problem.
        Walls.Square(Vertex(x, y, z, WallColor), Vertex(x, y, z + 1, WallColor), Vertex(x + 1, y, z + 1, WallColor), Vertex(x + 1, y, z, WallColor));
    else
        PushBox(x, y - 1, z, Distance + 1);

    // Front
    if (World.MapData[x, y + 1, z].Contents.Impassible())
        Walls.Square(Vertex(x, y + 1, z, WallColor), Vertex(x, y + 1, z + 1, WallColor), Vertex(x + 1, y + 1, z + 1, WallColor), Vertex(x + 1, y + 1, z, WallColor));
    else
        PushBox(x, y + 1, z, Distance + 1);

    // Ceiling
    if (World.MapData[x, y, z - 1].Contents.Impassible())
        Ceiling.Square(Vertex(x, y, z, CeilingColor), Vertex(x, y + 1, z, CeilingColor), Vertex(x + 1, y + 1, z, CeilingColor), Vertex(x + 1, y, z, CeilingColor));
    else
        PushBox(x, y, z - 1, Distance + 1);

    // Floor
    if (World.MapData[x, y, z + 1].Contents.Impassible())
        Floor.Square(Vertex(x, y, z + 1, FloorColor), Vertex(x + 1, y, z + 1, FloorColor), Vertex(x + 1, y + 1, z + 1, FloorColor), Vertex(x, y + 1, z + 1, FloorColor));
    else
        PushBox(x, y, z + 1, Distance + 1);
}


private VertexPositionColor Vertex(int x, int y, int z, Color Base)
{
    float Distance = Math.Abs(State.xLocation - x) + Math.Abs(State.yLocation - y) + Math.Abs(State.zLocation - z);
    float Scale = Math.Max((MaxDistance - Distance) / MaxDistance, 0);
    Color ThisColor = Color.Multiply(Base, Scale);
    //ThisColor = Color.White;
    return new VertexPositionColor(new Vector3(x, y, z), ThisColor);
}

and the main draw routine:



protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, MaxDepth, 0);

    // TODO: Add your drawing code here
    spriteBatch.Begin();
    DrawStatusMessages();
    DrawDungeon();
    spriteBatch.End();

    base.Draw(gameTime);
}

Answer



I might be missing something simple here. But shouldn't this be exactly what should happen?


For instance.


If I have a black piece of paper and apply a completely transparent piece of paper on top of it, all I'm going to do is see through the transparency to the black.


Have you tried just rendering any old texture in the background first, and seeing if the screen turns to whatever colour the background texture is when turned transparent?


Monday, June 25, 2018

physics - 3D game on a planet



Would it be much more work to create a 3D game on a planet rather than on a flat plane? What engines would support this and what techniques would I use?


An example would be a small sphere the player could walk around to get back where they started.




Sunday, June 24, 2018

c# - How do I load Gleed 2D levels in WP7 XNA game?


I've been trying for a long time to use this modified version of GLEED2D that includes WP7 support.


I'm stuck on loading the level into my game. My Game1.cs class has the following:


protected override void Initialize()
{
    Stream stream = TitleContainer.OpenStream("Content/leveltest.xml");
    XElement xml = XElement.Load(stream);
    level = LevelLoader.Load(xml);
    base.Initialize();
}

I included my level's XML file in the Content directory, but when I try to build the solution, it gives this error:


XML is not in the XNA intermediate format. Missing XnaContent root element.

I didn't find any reference on how to fix it on the developer's blog or the GitHub wiki.


Any Idea?



Answer



The problem is that you're trying to build the XML file using the XNA's Content Pipeline. However, in the source code you posted, the XML file is being loaded at runtime using the LINQ to XML API (XDocument) so it doesn't need to go through the content pipeline.


To solve the problem, go to the Solution Explorer, browse into your Content project to your XML file, and then on its Properties, change the Build Action property from Compile to Content, and also enable the Copy to Output Directory property. See picture below:



[image]


This way your XML file will still be copied to the Content folder, without being built by the XNA's Content Pipeline.


opengl - in the shadow of a sphere



(Related, but somewhat different, to my previous question)


How can I determine in a fragment shader if a fragment is in the shadow of a sphere?



That is, if it is occluded by the sphere and is past the sphere's horizon from the camera (if you are in front of the horizon you are not in the shadow even if you are in the sphere; the sphere is not solid)


In perspective, the horizon of the sphere is in front of the centre-point. Imagine holding a football at arm's length and staring at a point on its horizon; now move the football closer to your eye; what happens? It is no longer visible; the closer the sphere is to the eye, the less of the surface you can see:


[image]


As I imagine it, it is:




  1. are you in the cone that is from the camera and passes through the horizon of the sphere as seen from the eye? and




  2. are you past that horizon?





How do you compute the plane of the horizon, the cone, and how do you test for it in the fragment shader?




  1. Imagine you had a ray that went through the camera and the fragment. The nearest distance between that ray and the centre of the sphere being less than the sphere's radius would tell you if it was in the 'cone' of the sphere.




  2. Now imagine you knew the distance from the camera to the horizon; if the closest point on the ray was less than this distance, it's in front of the sphere; otherwise it's past the horizon. (We can make this simplification because the fragments we want to test are never deep in the middle of the sphere.)





With these two values, you determine if a fragment is 'in the shadow' of the sphere.


But how do you compute this? What, even, is the coordinate of the camera (0,0,-1 if orthogonal projection, else 0,0,0?)? And how far away is the horizon of the sphere?


And what's the code for nearest point on ray to point? What I've come up with is [src]:


t = (P-B).(A-B) / (A-B).(A-B)

If P is the sphere's centre, A is the fragment's position and B is the camera (at 0,0,0, so it can be omitted as it's a no-op):


// it's a unit sphere:
var sphereCentre = mat4_vec3_multiply(
    mat4_inverse(mat4_multiply(pMatrix, mvMatrix)),
    [0,0,0]);
gl.uniform3fv(program.sphereCentre, sphereCentre);
gl.uniform1f(program.sphereRadius, 1);

Then the vertex shader just has to pass the fragment position along:


precision mediump float;
attribute vec3 vertex;
uniform mat4 pMatrix, mvMatrix;
varying vec3 p;
void main() {

gl_Position = pMatrix * mvMatrix * vec4(vertex,1.0);
p = gl_Position.xyz/gl_Position.w;
}

And the fragment shader sees if it's inside the cone using the distance to the sphere centre:


precision mediump float;
uniform vec4 fgColour, bgColour;
uniform vec3 sphereCentre;
uniform float sphereRadius; // always 1 in my game fwiw
varying vec3 p;

void main() {
float t = dot(sphereCentre,p) / dot(p,p); // where on line?
vec3 d = (p*t) - sphereCentre; // distance from nearest point to sphere
//### now we need to know if its in front of the horizon to force fgColour ???
gl_FragColor = (dot(d,d) <= sphereRadius*sphereRadius)? fgColour: bgColour;
}

This might be along the right track, but it's not working (it looks hopeful drawn in ortho; in perspective it often draws things in the wrong colour). And how do I compute the horizon?



Answer



To answer my own question:



The camera is at 0,0,0 in view space.


The sphere's centre has to be converted to view space and passed in as a uniform; this means multiplying it by the modelview matrix.


The vertex shader passes on the fragment's coordinate in view-space:


precision mediump float;
attribute vec3 vertex;
uniform mat4 pMatrix, mvMatrix;
varying vec4 pos;
void main() {
pos = (mvMatrix * vec4(vertex,1.0));
gl_Position = pMatrix * pos;

}

The fragment shader does the check:


precision mediump float;
uniform vec4 fgColour, bgColour;
uniform vec3 sphereCentre;
uniform float sphereRadius2; // always 1 in my game fwiw
varying vec4 pos;
void main() {
vec3 p = pos.xyz/pos.w;

float t = dot(sphereCentre,p)/dot(p,p);
vec3 d = (p*t)-sphereCentre;
gl_FragColor = (dot(d,d) > sphereRadius2 || t<=1.0)? fgColour: bgColour;
}

This computes where on the fragment's ray the nearest point is, as a ratio t.


It also computes the distance (squared - dot(d,d)) between the nearest point and the sphere's centre.


If the nearest point is beyond the sphere or the intersection of the line is less than 1 (meaning it is in front of the sphere) then it is in the foreground.


To compute point sprites correctly you need an extra check if it is in the background, because the check outlined above is for the centre-point of the sprite and not for each fragment in the sprite. If it's possibly in the shadow of the sphere, you need to additionally check the exact fragment in the sprite.


sentence construction - You are better than he. You are better than him





You are better than he.


You are better than him.



Which one is correct? In my view, both are correct. "He" is correct according to examiners; otherwise, both are used in conversation.



Answer



You are better than he.


You are better than him.


Both sentences are correct, without any difference in meaning. However, the former is very formal. Normally, you use the structure pronoun + verb after than such as you are better than he is.



The structure of the latter is used in informal English.


Where is "the" used? Is there a rule? Why no definite article in "Shops are open late in summer"?



Notice these sentences:



  1. Shops are open late in summer.

  2. Summer is traditionally viewed as a slow season for games.

  3. At the summer solstice, the days are longest and the nights are shortest.

  4. It's time to start thinking about what 2016's Song of the Summer will be.


In English class, they told us that "the" is used for things that are known. One example of this was a phrase like "The man who killed his wife". So when we are speaking of that man, we know what he did; somehow we know something more than nothing about him. He's not just any man in the world.


So now I'm confused about why "the" is not used in the first and second sentences. Can anyone please explain it to me?



Answer




"Summer", like the other season names, is often treated as a name (and sometimes written with a capital letter, like other names). Examples 1 and 2 do this.


Example 3 is not a counter-example: "summer" is there used as a modifier for "solstice", which is a common noun and requires an article.


Example 4 is a bit more complicated. "Song of Summer" would be possible, but would suggest Summers in general; "Song of the Summer", especially since a year is given, suggests that particular Summer.


Can I leverage the fact that my scene is often static to improve OpenGL (JOGL) performance?


My scene is drawn based on the location of several (often several million) vertices (kept in VBO's) and a camera. I can easily tell in my code when my scene has changed and when it hasn't. There are also some odd cases such as the window being resized, but I believe I can easily enumerate and handle those as well.


Can I (in user code or through some OpenGL property) leverage this to increase the performance when the scene is static? Clearly when the scene is changing, all of the math must be done to properly calculate what should be rendered. But when the scene is static, that picture isn't changing each frame.


I've tried implementing something in my code to do this, but the result is a flickering scene (and I'm not entirely sure why). Basically I check to see if anything has changed and if it hasn't I simply return from the display(GLAutoDrawable drawable) function that is invoked by the JOGL FPSAnimator.


I feel like this is probably a common problem that should have a standard solution. However, I haven't been able to find anything so far.



Answer




You can render the scene to an FBO when it changes, and render the FBO to the screen every frame. It's a little more effort than just enabling a property, but still quite straightforward to implement. This approach also allows you to render something in the background and foreground independently of the scene.


Edit: Check this tutorial on using FBOs. You need to do the following initialization steps (a rough code sketch follows the list):



  1. Create and bind a Frame Buffer Object

  2. Create a Renderbuffer Object the same size as the screen and bind it to GL_DEPTH_COMPONENT. Renderbuffers are buffers that you are not using as a texture; in this case you need a Renderbuffer for the depth buffer.

  3. Create a texture the same size as the screen. Specify the format to include an alpha channel if you want to render something behind the scene as well.

  4. Attach both the Renderbuffer object and the texture to the FBO (Attaching images to FBO)


When rendering the scene to the FBO:




  1. Bind the Frame Buffer Object

  2. Render the scene

  3. Bind 0 to restore normal screen rendering


When rendering the FBO to the screen:



  1. Bind the texture attached to the FBO

  2. Render a full-screen quad. Provide texture coordinates for each vertex spanning the whole texture range.


This should be enough for you. If you still need further help, I can try to provide a working code sample when I have time.



Saturday, June 23, 2018

sprites - Algorithm for spreading labels in a visually appealing and intuitive way


Short version



Is there a design pattern for distributing vehicle labels in a non-overlapping fashion, placing them as close as possible to the vehicle they refer to? If not, is any of the method I suggest viable? How would you implement this yourself?



Extended version


In the game I'm writing I have a bird's-eye view of my airborne vehicles. I also have, next to each of the vehicles, a small label with key data about the vehicle. This is an actual screenshot:


Two vehicles with their labels


Now, since the vehicles could be flying at different altitudes, their icons could overlap. However I would like to never have their labels overlapping (or a label from vehicle 'A' overlap the icon of vehicle 'B').


Currently, I can detect collisions between sprites and I simply push away the offending label in a direction opposite to the otherwise-overlapped sprite. This works in most situations, but when the airspace gets crowded, the label can get pushed very far away from its vehicle, even if there was a smarter alternative. For example I get:



  B - label
A -----------label
C - label

where it would be better (= label closer to the vehicle) to get:


          B - label
label - A
C - label

EDIT: It also has to be considered that besides the overlapping-vehicles case, there might be other configurations in which vehicles' labels could overlap (the ASCII-art examples show, for example, three very close vehicles in which the label of A would overlap the icons of B and C).



I have two ideas on how to improve the present situation, but before spending time implementing them, I thought I'd turn to the community for advice (after all, it seems like a "common enough problem" that a design pattern for it could exist).


For what it's worth, here are the two ideas I was thinking of:


Slot-isation of label space


In this scenario I would divide the whole screen into "slots" for the labels. Then, each vehicle would always have its label placed in the closest empty one (empty = no other sprites at that location).


Spiralling search


From the location of the vehicle on the screen, I would try to place the label at increasing angles and then at increasing radii, until a non-overlapping location is found. Something along the lines of:


try 0°, 10px
try 10°, 10px
try 20°, 10px
...

try 350°, 10px
try 0°, 20px
try 10°, 20px
...

Answer



After some thought, I finally decided to implement the spiralling search method I briefly described in the original question.


The rationale is that Byte56's method needs special treatment for certain conditions, while the spiralling search doesn't, and it codes up in a really compact way. Also, the spiralling search emphasises finding the closest spot to the vehicle to place the label, which IMO is the main factor in making the map readable.


However please continue to upvote his answer, as it's not only useful, it's also very well written!


Here's a screenshot of the result achieved with the spiralling code:


[image]



And here's the code which, although not self-contained, gives an idea of how simple the implementation is:


def place_tags(self):
    for tag in self.tags:
        start_angle = tag.angle
        while not tag.place() or is_colliding(tag):  # See note n.1
            tag.angle = (tag.angle + angle_step) % 360
            if tag.angle == start_angle:
                tag.radius += radius_step
        tag.connector.update()  # See note n.2


Note 1 - tag.place() returns True if the tag is entirely on the visible area of the screen/radar. So that line reads like "keep on looping if the tag is outside the radar or it overlaps something else..."


Note 2 - tag.connector.update() is the method that draws the line connecting the aeroplane icon to the label/tag with the text information.


unity - Is it more efficient to add scripts to each GameObject or to a single parent object?


Does a C# script added to a parent object work more efficiently and effectively (in terms of both memory and performance) on all objects (in my case, lights) than adding script components to each light GameObject? The same goes for other GameObjects parented in the hierarchy.



Answer



There is some amount of overhead to invoking each script's Update() and other methods, compared to maintaining a list of objects and updating them all at once from a single entry point.


So, if you maintain a list of multiple objects to update, you may see some runtime efficiency wins by iterating over them from within a single script instance. Just note that if you're searching for instances to update using FindObjectsOfType or GetComponentsInChildren, the search cost will likely outweigh the savings of not invoking each one's Update separately. So look for ways that you can persist the list across frames rather than re-building it continuously.
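As a hypothetical sketch of that pattern (the FlickerLight / FlickerLightManager names and the flicker behaviour are made up for illustration): each instance registers itself in a static list, and a single manager drives them all from one Update call, so the list persists across frames and no per-instance Update method is needed.

using System.Collections.Generic;
using UnityEngine;

public class FlickerLight : MonoBehaviour
{
    public static readonly List<FlickerLight> Active = new List<FlickerLight>();
    Light cachedLight;

    void Awake()     { cachedLight = GetComponent<Light>(); } // requires a Light on this object
    void OnEnable()  { Active.Add(this); }
    void OnDisable() { Active.Remove(this); }

    public void Tick(float time)
    {
        // Cheap per-instance work, driven by the manager below.
        cachedLight.intensity = 1f + 0.2f * Mathf.PerlinNoise(time, GetInstanceID() * 0.01f);
    }
}

public class FlickerLightManager : MonoBehaviour
{
    void Update()
    {
        float t = Time.time;
        var lights = FlickerLight.Active;
        for (int i = 0; i < lights.Count; i++)
            lights[i].Tick(t);
    }
}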


In practice, I haven't experienced noticeable slowdowns from having dozens and dozens of instances of simple scripts updating independently, so I recommend profiling first before adding complexity to your project - it may be premature optimization for many applications.


You should also weigh whether your developers may gain iteration efficiency by keeping the instances separate. This can make it easier to individually select & tweak a single instance or group of instances, or make use of prefabs and object references. These operations can get more complicated if the needed methods & data are tied up in a higher-level manager script.


Which pattern makes more sense for a particular feature will depend a lot on the specifics of what you're trying to accomplish, what workflows your team likes to use, and what your performance profiling looks like.


Friday, June 22, 2018

tiles - How to create a hexagon world map in PHP from a database for a browser based strategy game


I'm trying to create a hexagon world map for my PHP browser-based strategy game. I've created a table in my database with the following columns per row: id, type, x, y and occupied, where type is the kind of tile, defined as a number (for example, 1 is grass). The map itself is 25 x 25.


I want to draw the map from the database with clickable tiles and the possibility to navigate through the map with arrows. I don't really have a clue about how to start with this, and any help would be appreciated.




Answer



Edit: Fixed an error in the Javascript that caused an error in Firefox.


Edit: just added ability to scale hexes to the PHP source code. Tiny 1/2 sized ones or 2x jumbo, it's all up to you :)


I wasn't quite sure how to put this all into writing, but found it was easier to just write the code for a full live example. The page (link and source below) dynamically generates a hexmap with PHP and uses Javascript to handle map clicks. Clicking on a hex highlights the hex.


The map is randomly generated, but you should be able to use your own code instead to populate the map. It is represented by a simple 2d array, with each array element holding the type of terrain present in that hex.


Click me to try the Hex Map Example


To use, click on any hex to highlight it.


Right now it's generating a 10x10 map, but you can change the map size in the PHP to be any size you want. I'm also using a set of tiles from the game Wesnoth for the example. They are 72x72 pixels in height, but the source also lets you set the size of your hex tiles.


The hexes are represented by PNG images with "outside the hex" areas set as transparent. To position each hex, I am using CSS to set each tile's absolute position, calculated by the hex grid coordinate. The map is enclosed in a single DIV, which should make it easier for you to modify the example.


Here is the full page code. You can also download the demo source (including all hex images).



// ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
// :: HEX.PHP
// ::
// :: Author:
// :: Tim Holt, tim.m.holt@gmail.com
// :: Description:
// :: Generates a random hex map from a set of terrain types, then
// :: outputs HTML to display the map. Also outputs Javascript
// :: to handle mouse clicks on the map. When a mouse click is

// :: detected, the hex cell clicked is determined, and then the
// :: cell is highlighted.
// :: Usage Restrictions:
// :: Available for any use.
// :: Notes:
// :: Some content (where noted) copied and/or derived from other
// :: sources.
// :: Images used in this example are from the game Wesnoth.
// ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::


// --- Turn up error reporting in PHP
error_reporting(E_ERROR | E_WARNING | E_PARSE | E_NOTICE);

// --- Define some constants
$MAP_WIDTH = 10;
$MAP_HEIGHT = 10;
$HEX_HEIGHT = 72;

// --- Use this to scale the hexes smaller or larger than the actual graphics
$HEX_SCALED_HEIGHT = $HEX_HEIGHT * 1.0;

$HEX_SIDE = $HEX_SCALED_HEIGHT / 2;
?>


Hex Map Demo






// ----------------------------------------------------------------------
// --- This is a list of possible terrain types and the
// --- image to use to render the hex.
// ----------------------------------------------------------------------
$terrain_images = array("grass" => "grass-r1.png",

"dirt" => "dirt.png",
"water" => "coast.png",
"path" => "stone-path.png",
"swamp" => "water-tile.png",
"desert" => "desert.png",
"oasis" => "desert-oasis-tile.png",
"forest" => "forested-mixed-summer-hills-tile.png",
"hills" => "hills-variation3.png",
"mountain" => "mountain-tile.png");


// ==================================================================

function generate_map_data() {
// -------------------------------------------------------------
// --- Fill the $map array with values identifying the terrain
// --- type in each hex. This example simply randomizes the
// --- contents of each hex. Your code could actually load the
// --- values from a file or from a database.
// -------------------------------------------------------------
global $MAP_WIDTH, $MAP_HEIGHT;

global $map, $terrain_images;
for ($x=0; $x<$MAP_WIDTH; $x++) {
for ($y=0; $y<$MAP_HEIGHT; $y++) {
// --- Randomly choose a terrain type from the terrain
// --- images array and assign to this coordinate.
$map[$x][$y] = array_rand($terrain_images);
}
}
}


// ==================================================================

function render_map_to_html() {
// -------------------------------------------------------------
// --- This function renders the map to HTML. It uses the $map
// --- array to determine what is in each hex, and the
// --- $terrain_images array to determine what type of image to
// --- draw in each cell.
// -------------------------------------------------------------
global $MAP_WIDTH, $MAP_HEIGHT;

global $HEX_HEIGHT, $HEX_SCALED_HEIGHT, $HEX_SIDE;
global $map, $terrain_images;

// -------------------------------------------------------------
// --- Draw each hex in the map
// -------------------------------------------------------------
for ($x=0; $x<$MAP_WIDTH; $x++) {
for ($y=0; $y<$MAP_HEIGHT; $y++) {
// --- Terrain type in this hex
$terrain = $map[$x][$y];


// --- Image to draw
$img = $terrain_images[$terrain];

// --- Coordinates to place hex on the screen
$tx = $x * $HEX_SIDE * 1.5;
$ty = $y * $HEX_SCALED_HEIGHT + ($x % 2) * $HEX_SCALED_HEIGHT / 2;

// --- Style values to position hex image in the right location
$style = sprintf("left:%dpx;top:%dpx", $tx, $ty);


// --- Output the image tag for this hex
// --- (the exact attributes below are illustrative)
print "<img src=\"$img\" alt=\"$terrain\" style=\"position:absolute;$style\">\n";
}
}
}

// -----------------------------------------------------------------
// --- Generate the map data
// -----------------------------------------------------------------

generate_map_data();
?>

Hex Map Example


View page source
Download source and all images


<!-- The map is wrapped in a single DIV; render_map_to_html() emits the
     absolutely positioned hex images into it. -->
<div>
<?php render_map_to_html(); ?>
</div>
<?php
// --- Output a key of the terrain types and their images.
// --- (each() is gone as of PHP 8, so a foreach is used here; the exact
// ---  markup printed for each entry is illustrative.)
foreach ($terrain_images as $type => $img) {
    print "<div><img src=\"$img\" alt=\"$type\"> $type</div>\n";
}
?>

</body>
</html>



Here is a screenshot of the example...


Hex Map Example Screenshot


Definitely could use some improvements. I noticed in a previous comment you said you were familiar with jQuery, which is good. I didn't use it here to keep things simple, but it would be pretty useful to use.


c# - Unity - Random Spawned Enemy Is Not Moving


I want to make an enemy spawner in Unity that spawns enemies with a random rotation, facing either left or right. It's a 2D platformer. The random rotation is working, but the enemy is not moving after it's spawned.


I put the enemy's Rigidbody2D into the enemiesrb field of the EnemySpawn script and assigned its .velocity a new Vector2.


I have 2 scripts




EnemyPatrol:



public float moveSpeed;
public bool moveRight;

public Transform wallCheck;
public float wallCheckRadius;
public LayerMask whatIsWall;
private bool hitWall;


public float lockPos = 0;

// Use this for initialization
void Start () {

}

// Update is called once per frame
void Update() {


hitWall = Physics2D.OverlapCircle(wallCheck.position, wallCheckRadius, whatIsWall);


if (hitWall)
{
moveRight = !moveRight;
}



if (moveRight)
{
transform.localScale = new Vector3(-1f, 1f, 1f);
//GetComponent<Rigidbody2D>().velocity = new Vector2(moveSpeed, GetComponent<Rigidbody2D>().velocity.y);
}
else
{
transform.localScale = new Vector3(1f, 1f, 1f);
//GetComponent<Rigidbody2D>().velocity = new Vector2(-moveSpeed, GetComponent<Rigidbody2D>().velocity.y);
}
}



and EnemySpawn



public GameObject enemies;
private Rigidbody2D enemiesrb;

public float spawnTime;
public int maxEnemies;
public int lofr;


private int amount;

public float moveSpeed;
void Start()
{
enemiesrb = enemies.GetComponent<Rigidbody2D>();
}



void Update()
{
//enemies = GameObject.FindGameObjectsWithTag("Enemy");
//amount = enemies.Length;

if (amount != maxEnemies)
InvokeRepeating("spawnEnemy", spawnTime , spawnTime);

}


void spawnEnemy()
{
lofr = Mathf.Abs(Random.Range(0, 1));

if (lofr == 1)
{
Instantiate(enemies , transform.position, Quaternion.Euler(1,180,1));
enemiesrb.velocity = new Vector2(moveSpeed, GetComponent<Rigidbody2D>().velocity.y);
}
else if(lofr == 0)

{
Instantiate(enemies , transform.position, Quaternion.Euler(1,1,1));
enemiesrb.velocity = new Vector2(-moveSpeed, GetComponent<Rigidbody2D>().velocity.y);
}
CancelInvoke();
}

lofr means left or right; it determines whether the enemy should spawn facing left or right.
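For what it's worth, a very common cause of this exact symptom is that the velocity is being set on the prefab's Rigidbody2D (the one cached in Start) instead of on the copy returned by Instantiate, so the freshly spawned enemy never receives it. Below is a minimal sketch of spawnEnemy written the other way; it keeps the names from the question, but the diagnosis and the simplified rotation values are assumptions, not something confirmed by the post:

void spawnEnemy()
{
    // Random.Range with int arguments excludes the upper bound,
    // so (0, 2) is needed to ever get a 1.
    lofr = Random.Range(0, 2);

    // Keep a reference to the spawned copy, not the prefab.
    Quaternion rot = (lofr == 1) ? Quaternion.Euler(0f, 180f, 0f) : Quaternion.identity;
    GameObject clone = Instantiate(enemies, transform.position, rot);

    // Drive the clone's own Rigidbody2D.
    Rigidbody2D rb = clone.GetComponent<Rigidbody2D>();
    float direction = (lofr == 1) ? 1f : -1f;
    rb.velocity = new Vector2(direction * moveSpeed, rb.velocity.y);
}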




floating point - Unity, positioning with ints instead of floats


I round my transform.position to the nearest integer at the end of every frame so it's never a fractional number. I would like to know if there's a way to make Unity store the position in ints instead of floats, to prevent floating-point precision problems.



Answer



No, you can't do that, because Unity does all of its internal calculations in floats.


But if you would like to implement your game mechanics entirely with integer arithmetic, you could write your own Vector3Int class, use it for all of your game logic, and then have each of your game objects write its Vector3Int position to transform.position in LateUpdate().
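As a rough illustration of that idea (note that Unity 2017.2 and later actually ship a built-in Vector3Int struct, so you may not need to write your own), here is a minimal sketch; the GridMover name and its Move method are invented for the example:

using UnityEngine;

// All gameplay happens in exact integer grid coordinates; the float-based
// transform is only written once per frame, as a purely visual output.
public class GridMover : MonoBehaviour
{
    public Vector3Int gridPosition;   // built-in struct in Unity 2017.2+

    public void Move(Vector3Int delta)
    {
        gridPosition += delta;        // exact integer arithmetic, no drift
    }

    void LateUpdate()
    {
        // The only float conversion happens here, for rendering.
        transform.position = gridPosition;   // implicit Vector3Int -> Vector3
    }
}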


How can I decide when to use “for” + “-ing” or “to” + [infinitive] in a sentence?


Example:



I have nothing to do for now.





Nothing for doing for now.



Another one:



I have some things to study.




I have some things for studying.



New Examples: (QUESTION UPDATED)




I don't know if you use XXX, but I was wondering if it could be useful for sharing knowledge among us.




I don't know if you use XXX, but I was wondering if it could be useful to share knowledge among us.



Another one:



So, we could create a group to share any kind of technology knowledge.





So, we could create a group for sharing any kind of technology knowledge.




Answer



This is a very good explanation of the distinction; I have included it reformatted below:




We use for + the -ing form of a verb to talk about the function of something or how something is used:



I need something for storing CDs.

The PC is still the most popular tool for developing software systems.




We use for + the -ing form of a verb to refer to the reason for something:



You should talk to Jane about it. You know, she’s famous for being a good listener. (A lot of people know she’s such a good listener.)




Warning: We don’t use for + -ing to express our purpose or intention. We use to + infinitive:


We’re going to Lisbon to visit my aunt.

NOT: We’re going to Lisbon for visiting my aunt. or … for visit my aunt.
He’s now studying to be a doctor.
NOT: He’s now studying for to be a doctor. or … for being a doctor.
There’ll be sandwiches to eat and juice to drink.
NOT: There’ll be sandwiches for eat and juice for drink.



I am, however, inclined to disagree with the phrasing of the warning a little, and would phrase it as



Warning: We don't use for + -ing to express an aim or intention. We use to + infinitive.




"Purpose" is a poor choice of words to use in this warning. As you might notice in the first example that this passage offers,



I need something for storing CDs.



is a perfectly fine phrase, and does imply the purpose of needing said "something". Here, you can also substitute to because there is an associated aim/intention:



I need something to store CDs.



In the second example, however:




The PC is still the most popular tool for developing software systems.



a substitution with to develop doesn't work because the PC is not the agent which develops software systems, but the agent used to develop software systems. As such, an appropriate rephrasing would be:



The PC is still the most popular tool used to develop software systems.



And because this remains a statement of purpose, you can also say:



The PC is still the most popular tool used for developing software systems.




I should note that I now rescind my recommendation of the "for the purpose of" substitution test that I initially suggested in comments on the original post; although that was what I immediately thought of, it is in no way a comprehensive or definitive test. For instance, it fails for the "for being" and "for visiting" examples above.


unity - How to send an interface message?


I come from Unreal (and still use it). There, it works like this:
Let's say the player has a laser gun or something; when shooting, you use a raycast to see whether you've hit anything, and if so, you send an interface message (say, the Damage() function of an IDamagable interface) to the hit result.


Now, if the hit result implements that interface and has a Damage() function, it gets damaged. (And of course, if it doesn't implement the interface, nothing happens.)


But in Unity, I don't know how to send that interface message to the hit result (essentially saying "if you have IDamagable implemented, call your Damage() function"), while nothing happens if it doesn't implement it.


I've watched many videos on interfaces in Unity, but I still don't know how to achieve the same as in Unreal.
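In case a concrete sketch helps: the closest common Unity equivalent is simply to ask the thing you hit for the interface with GetComponent (or TryGetComponent), which returns nothing when the interface isn't implemented, so "no receiver" is handled gracefully. The class and field names below are invented for illustration, and TryGetComponent needs Unity 2019.2 or newer (on older versions, GetComponent plus a null check behaves the same way):

using UnityEngine;

public interface IDamagable
{
    void Damage(float amount);
}

public class LaserGun : MonoBehaviour
{
    public float damage = 10f;
    public float range = 100f;

    void Fire()
    {
        // Raycast forward from the gun.
        if (Physics.Raycast(transform.position, transform.forward,
                            out RaycastHit hit, range))
        {
            // If whatever we hit implements IDamagable, damage it;
            // otherwise this quietly does nothing.
            if (hit.collider.TryGetComponent<IDamagable>(out var target))
            {
                target.Damage(damage);
            }
        }
    }
}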




Thursday, June 21, 2018

articles - Please explain what exactly this "Priority Seat" represents


I would like to understand what exactly this "Priority Seat" represents.


Priority Seat notice



Image credit: Southern Rail, UK


For a long time, I thought "Priority Seat" was the name of the adjacent priority seat, so that "a" was not required, just like the name of your iPhone is "iPhone," not "an iPhone." But someone told me that "Priority Seat" is not a proper noun, and that the words are capitalized because it's a title.


Then I came up with a new explanation: this "Priority Seat" is simply an abbreviated form of "A Priority Seat," with "A" omitted because it is a sign. But when I imagine a sign that reads "A Priority Seat" in this context (adjacent to a priority seat), it feels wrong; the text wouldn't create the right nuance. So I had to drop this idea as well.


Could anyone explain what kind of grammatical structure is used for this "Priority Seat" so that I could clearly understand what it represents?



Answer



You should envision the sign saying,



"This is a Priority Seat"



but because it's a sign, it's shortened to just "Priority Seat."

It's the same with signs that say, "Restroom" or "Exit" or "Lobby" or "Parking Garage."


This works because signs like these are labels and labels say what the things they label are.


adjectives - The grammar of 'Ready to take' versus 'Easy to take'


Why is this ungrammatical?:




  1. * The medicine is easy to be taken.




when we can say:




  1. The medicine is ready to be taken.



What is the difference between "ready" and "easy" that makes the one statement grammatical and the other ungrammatical?




grammaticality - provides information "on", "of" or "about" something?


Which is grammatical: "it provides information on something", "it provides information of something", or "it provides information about something"? Or, if all are grammatical, which one should be used in which context? Are other prepositions possible, e.g. "in"?



Answer



First, let's take care of that pesky of.



The documents contain information of great importance.


The intercepted information was of little merit.



This doesn't speak about the subject (the actual content of the information) but about the information itself: 'of questionable value', 'of no interest to me', 'of utmost urgency'. This is a rather formal, official form. Normally you'd say "important information" or "urgent information", but the of form is a well-accepted formal phrasing.



You might try to use it to indicate the owner of the information, but that's really awkward: "The disk contains information of Sony on their newest mp3 player" - I don't think you'd ever encounter that in real life. "From" or "by" would be much more natural.


Now, the subtle difference between "on" and "about". They are practically identical, with only subtle differences in rare cases. While "on" always introduces information directly "on" the subject (direct data such as names, its own properties, things relating to it directly), "about" can relate indirectly.



I have new (or, a new piece of) information about Mary: Her boyfriend was yesterday at her flat at 8PM and there was no one there, lights off, door locked, no car.



You wouldn't say information on Mary in the above example. That's indirect information, a hint, something that tells us she wasn't there then, but doesn't tell us anything directly. It sheds some light but it doesn't relate to her directly. Still, in a great many cases you can use the two interchangeably.


There's one more case where you use strictly on: dirt, material for blackmail, evidence against a given person in an investigation, compromising information.



Finally, we got some compromising information on Fisher. He called a drug dealer yesterday, and we have the call recording implying he wants to buy some drugs.




As for others...


"in"/"at" - standard locations, where the information was found. "on" can be used that way too - "I found it on the Internet!"


There's one more word that often goes with information: regarding. This is the formal counterpart to on/about; it pairs with the of form and refers to the content of the information.



Information of utmost importance regarding safety of the president.



html5 - Javascript Isometric draw optimization


I'm having trouble with isometric tiles drawing.


At the moment I've got an array with the tiles I want to draw, and it all works fine until I increase the size of the array.


Since I draw ALL tiles on the map it really affects the game performance (obviously) :D.


My problem is that I'm no genius when it comes to Javascript, and I haven't managed to draw only what is in the viewport.


It should be fairly simple for an expert, though, because the sizes are fixed: the canvas is 960x480 pixels and each tile is 64x32, which gives 16 tiles on the first row, 15 on the next, and so on, for a total of 16 rows.


Tile 0,0 is in the top-right corner; X is drawn top to bottom and Y right to left, so going through the tiles on the first row from left to right means +X, -Y.


Here is the relevant part of my drawMap()


function drawMap(){

var tileW = 64; // Tile Width
var tileH = 32; // Tile Height
var mapX = 960-32;
var mapY = -16;
for(i=0;i<map.length;i++){
for(j=0;j<map[i].length;j++){
var drawTile = map[i][j];
var drawObj = objectMap[i][j];
var xpos = (i-j)*tileH + mapX;
var ypos = (i+j)*tileH/2 + mapY; // Place the tiles isometric.

ctx.drawImage(tileImg[drawTile],xpos,ypos);
if(drawObj){
ctx.drawImage(objectImg[drawObj-1],xpos,ypos-(objectImg[drawObj-1]));
}

}
}
}

Could anyone please help me translate this so that it draws only the relevant tiles? It would be deeply appreciated.





c++ - What's the largest "relative" level I can make using float?


Just as was demonstrated by games like Dungeon Siege and KSP, a large enough level will start to glitch because of how floating point works: you can't add 1e-20 to 1e20 without losing accuracy.


If I choose to limit the size of my level, how do I calculate the minimum speed my object can move at before its movement begins to be choppy?



Answer




A 32-bit float has a 23-bit mantissa.


That means each number is represented as 1.xxx xxx xxx xxx xxx xxx xxx xx times some power of 2, where each x is a binary digit, either 0 or 1. (With the exception of extremely small denormalized numbers less than \$2^{-126}\$ - they start with 0. instead of 1., but I'll ignore them for what follows)


So in the range from \$2^i\$ to \$2^{(i+1)}\$, you can represent any number to within an accuracy of \$\pm 2^{(i - 24)}\$.


As an example, for \$i = 0\$, the smallest number in this range is \$(2^0) \cdot 1 = 1\$. The next smallest number is \$(2^0) \cdot (1 + 2^{-23})\$. If you wanted to represent \$1 + 2^{-24}\$, you'll have to round up or down, for an error of \$2^{-24}\$ either way.


In this range:             You get accuracy within:
------------------------------------------------------------
0.25 - 0.5                 2^-26 = 1.490 116 119 384 77 E-08
0.5 - 1                    2^-25 = 2.980 232 238 769 53 E-08
1 - 2                      2^-24 = 5.960 464 477 539 06 E-08
2 - 4                      2^-23 = 1.192 092 895 507 81 E-07
4 - 8                      2^-22 = 2.384 185 791 015 62 E-07
8 - 16                     2^-21 = 4.768 371 582 031 25 E-07
16 - 32                    2^-20 = 9.536 743 164 062 5 E-07
32 - 64                    2^-19 = 1.907 348 632 812 5 E-06
64 - 128                   2^-18 = 0.000 003 814 697 265 625
128 - 256                  2^-17 = 0.000 007 629 394 531 25
256 - 512                  2^-16 = 0.000 015 258 789 062 5
512 - 1 024                2^-15 = 0.000 030 517 578 125
1 024 - 2 048              2^-14 = 0.000 061 035 156 25
2 048 - 4 096              2^-13 = 0.000 122 070 312 5
4 096 - 8 192              2^-12 = 0.000 244 140 625
8 192 - 16 384             2^-11 = 0.000 488 281 25
16 384 - 32 768            2^-10 = 0.000 976 562 5
32 768 - 65 536            2^-9  = 0.001 953 125
65 536 - 131 072           2^-8  = 0.003 906 25
131 072 - 262 144          2^-7  = 0.007 812 5
262 144 - 524 288          2^-6  = 0.015 625
524 288 - 1 048 576        2^-5  = 0.031 25
1 048 576 - 2 097 152      2^-4  = 0.062 5
2 097 152 - 4 194 304      2^-3  = 0.125
4 194 304 - 8 388 608      2^-2  = 0.25
8 388 608 - 16 777 216     2^-1  = 0.5
16 777 216 - 33 554 432    2^0   = 1

So if your units are metres, you'll lose millimetre precision around the 16 384 - 32 768 band (about 16-33 km from the origin).


It's commonly believed you can work around this by using a different base unit, but that's not really true, since it's relative precision that matters.




  • If we use centimetres as our unit, we lose millimetre precision at the 1 048 576-2 097 152 band (10-21 km from the origin)





  • If we use hectometres as our unit, we lose millimetre precision at the 128-256 band (13-26 km from the origin)




...so changing the unit over four orders of magnitude still ends up losing millimetre precision somewhere in the range of tens of kilometres. All we're shifting is where exactly in that band it hits (due to the mismatch between base-10 and base-2 numbering), not drastically extending our playable area.


Exactly how much inaccuracy your game can tolerate will depend on the details of your gameplay, physics simulation, entity sizes/draw distances, rendering resolution, etc., so it's tricky to set an exact cutoff. It may be that your rendering looks fine 50 km from the origin, but your bullets teleport through walls, or a sensitive gameplay script goes haywire. Or you may find the game plays fine, but everything has a barely perceptible vibration from inaccuracies in the camera transform.


If you know the level of accuracy you need (say, a span of 0.01 units maps to about 1 px at your typical viewing/interaction distance, and any smaller offset is invisible), you can use the table above to find where you lose that accuracy, and step back a few orders of magnitude for safety in case of lossy operations.


But if you're thinking about huge distances at all, it may be better to sidestep all of this by recentering your world as the player moves around. You choose a conservatively small square or cube-shaped region around the origin. Whenever the player moves outside this region, translate them, and everything in the world, back by half the width of this region, keeping the player inside. Since everything moves together, your player won't see a change. Inaccuracies can still happen in distant parts of the world, but they're generally much less noticeable there than happening right under your feet, and you're guaranteed to always have high precision available near the player.
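As a rough sketch of that recentering idea in Unity-style C# (the component name and threshold are illustrative, and this variant shifts by the player's full offset rather than half the region width, which amounts to the same thing):

using UnityEngine;

// "Floating origin": when the player strays too far from the origin, shift
// the whole world back so the player is near (0,0,0) again, keeping the
// highest float precision where it matters most.
public class FloatingOrigin : MonoBehaviour
{
    public Transform player;
    public float threshold = 1000f;   // conservative half-width of the safe region

    void LateUpdate()
    {
        Vector3 offset = player.position;
        if (offset.magnitude < threshold)
            return;

        // Move every root object (player included) by the same amount,
        // so nothing appears to move on screen.
        foreach (GameObject root in gameObject.scene.GetRootGameObjects())
        {
            root.transform.position -= offset;
        }
    }
}

A real implementation also has to touch anything that caches world-space data outside the transform hierarchy, such as particle systems simulating in world space or saved waypoint positions.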


Simple past, Present perfect Past perfect

Can you tell me which form of the following sentences is the correct one please? Imagine two friends discussing the gym... I was in a good s...