Friday, March 31, 2017

xna - Do I require a game engine when I want to make a game?




I started making a 2D game in C# and realized that I could also use a game engine or XNA. For the moment everything works fine but I'm afraid I will have problems in the future.


So do I actually need a game engine or XNA? Or will it also work without one?


PS: I don't know anything about XNA, so I would have to learn it.



Answer



Game engines are not required! However, since you seem to be new to game development, game engines are highly recommended. XNA will help you by abstracting a large number of complex tasks into simple ones. Think of an engine as a set of tools and code that covers all the "standard" game development tasks, so you don't have to re-write something that needs to be written for every game.


Obligatory game/engine advice: Make games, not engines.


unity - How to animate an arc from the middle in both directions


I have been able to make an arc that animates from one end to another using the following coroutine:


IEnumerator AnimateArc(float duration)
{
    float waitDur = duration / pts;

    for (int i = 0; i <= pts; i++)
    {
        float x = center.x + radius * Mathf.Cos(ang * Mathf.Deg2Rad);
        float y = center.y + radius * Mathf.Sin(ang * Mathf.Deg2Rad);

        arcLineRend.positionCount = i + 1;
        arcLineRend.SetPosition(i, new Vector2(x, y));

        ang += (float)totalAngle / pts;

        yield return new WaitForSeconds(waitDur);
    }
}

How can I animate this arc from the middle in both directions:



  1. at a constant speed

  2. with an ease (animates slightly faster in the middle and slows down towards the end)
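Neither half of the follow-up is Unity-specific. Here is a hypothetical, language-agnostic sketch of the two pieces — ordering the arc points from the middle outward, and an easing curve for the per-step delay. The function and variable names are my own, chosen to mirror the coroutine above; the easing curve is just one possible choice:

```python
import math

def arc_points_from_middle(center, radius, start_deg, total_deg, pts):
    """Yield (index, point) pairs ordered from the middle of the arc outward,
    alternating between the two sides, so both halves grow together."""
    mid = pts // 2
    order = [mid]
    for step in range(1, mid + 1):
        if mid - step >= 0:
            order.append(mid - step)
        if mid + step <= pts:
            order.append(mid + step)
    for i in order:
        ang = start_deg + total_deg * i / pts
        x = center[0] + radius * math.cos(math.radians(ang))
        y = center[1] + radius * math.sin(math.radians(ang))
        yield i, (x, y)

def ease_out_delay(step, steps, duration):
    """Per-step wait that grows toward the ends of the arc:
    fast in the middle, slowing down toward the edges."""
    t = step / steps                      # 0 at the middle, 1 at the ends
    return duration / steps * (0.5 + t)   # a simple, tweakable easing curve
```

For the constant-speed case, just use `duration / steps` as the wait; for the eased case, feed the outward step number into `ease_out_delay`.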





perfect constructions - Tense when saying "This is the first time" you've been somewhere


I'm very confused about tenses. I have examples in two different situations here.



Situation 1: I went to New York two months ago and was talking to someone about it.



A. This is the first time I've been to New York.


B. That was the first time I'd been to New York.



Which is correct in a daily conversation?




Situation 2: I'm going to New York next month and talking to someone.



A. This will be the first time I've been to New York.



B. This is the first time I've been to New York.


C. This will be the first time I go to New York.



Which sentence is appropriate?




According to a grammar article I read before, I can use this sentence in both cases:



This is the first time I've been to New York.



But I'm not sure if this is really so. So I'd like to know which tense I should use in each case. Could you explain it?




Answer



In situation 1, which version you select (they're both grammatically correct) depends on the tense in which you're telling the story. If you're telling the story in the present tense ("So I'm walking along the street..."), you would use option A to match, while if you're telling it in the past tense ("So I was walking along the street..."), you'd use option B.


In situation 2, you'll generally want either A or C.


The essence of the grammar article was probably that in English, it's possible to discuss either past events or hypothetical future events in the present tense, as if you were placing yourself in the time and narrating from that perspective. In this case, you'd use the present-tense "first time I've been", but it's much more common to narrate this way when relating past events than when talking about future plans.


Thursday, March 30, 2017

How to analyze this sentence’s logic: “If it rains, I'll take an umbrella.”



A person was asked to analyze the following sentence, but couldn't answer even after some searching. They did not understand that this was a logic puzzle.



If it rains, I'll take an umbrella.



How would one analyze the truth table of the logic of this sentence?



Answer



This is a classic example used in logic. See Google Search: "if p then q" rains umbrella



If it rains, (then) I'll take an umbrella.




If p then q.
p = it rains
q = I'll take an umbrella.


Statement is true or false accordingly:



  • True: It rains and I take my umbrella.

  • False: It rains and I don't take my umbrella.

  • True: It doesn't rain and I take my umbrella.

  • True: It doesn't rain and I don't take my umbrella.



Note the abbreviated rule:



  • True: It doesn't rain. (It doesn't matter if I take my umbrella.)


Note the equivalent statement: "I take my umbrella OR it doesn't rain." (Non-exclusive "or")
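The table and the equivalence above can be checked mechanically. A minimal sketch (the function name `implies` is my own):

```python
def implies(p, q):
    """Material conditional: "if p then q" is false only when p is true and q is false."""
    return (not p) or q

# Truth table for "If it rains (p), I'll take an umbrella (q)."
table = {(p, q): implies(p, q) for p in (True, False) for q in (True, False)}

# The only False row: it rains and I don't take my umbrella.
false_rows = [pq for pq, value in table.items() if not value]
```

Note that `(not p) or q` is exactly the equivalent statement above: "I take my umbrella OR it doesn't rain."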




Also note the alternative logic of Murphy's Law: a corollary of Murphy's Law says, "If I don't take my umbrella, it'll rain." (Credit to @J.R.)


libgdx - How to use moveTo Actor?


How to use moveTo?


I do so:


actor.addAction(Actions.moveTo(500, 500, 10));

but it does not move


Thank you



UPDATE


does not move


my code:


public class MyActions extends ApplicationAdapter {
    private Stage stage;
    private Actor actor;

    @Override
    public void create () {
        stage = new Stage();
        actor = new Actor() {
            {
                setSize(100, 100);
            }

            Sprite actorSprite = new Sprite(new Texture(Gdx.files.internal("badlogic.jpg")), 57, 10, (int)getWidth(), (int)getHeight());

            @Override
            public void draw(Batch batch, float parentAlpha) {
                actorSprite.draw(batch, parentAlpha);
            }
        };

        stage.addActor(actor);
        actor.addAction(Actions.moveTo(500, 500, 1));
    }

    @Override
    public void render () {
        Gdx.gl.glClearColor(1, 1, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        actor.act(Gdx.graphics.getDeltaTime());
        //stage.act();
        stage.draw();
    }
}

what's wrong?




word usage - What is the difference between effect and affect?



I am confused about how these two words differ in usage. It would be easier to understand if explained with examples.




Answer



There are lots of examples of usage for each sense of affect (OALD definition) and effect (OALD definition) at their dictionary definitions.


However, an interesting case appears with the following sentences.



The UN affects changes in our foreign policy.


The UN effects changes in our foreign policy.



Both sentences are valid, but have a different meaning. In the first, the meaning is that the UN influences planned changes in our foreign policy. In the second, the UN initiates changes in our foreign policy.


The word "joint" as a synonym for place?


I'd like to know in what context can the word "joint" be used as a synonym for place.


Is it used for places that sell products and/or services?



Answer



It's a very common and informal way to refer to a business, usually one that sells food or drink. "I run a burger joint in Anchorage." "Let's meet at that joint on the corner of 5th and Main." Usually it will refer to a small, casual restaurant or bar.


libgdx - How to animate abstract 2d top down water texture?



I'm currently implementing a game with a top-down view of the ocean. I use the following, somewhat abstract texture:


The actual texture is transparent, I added the green-like color for clarity.


The problem I now have is that I don't know how to animate this texture so the water looks nice. I tried to move the texture with a sine wave: texture.y += sin(angle). Of course the whole texture then moves, which looks kind of unrealistic. The next thing I tried was to add another layer and implement a parallax effect, so that reflections under the water surface would also move, but much slower. It looks a little better, but still not... nice enough.


I think the best-looking animation would be if the individual cells expanded and contracted, kind of like a web or a piece of cloth. Imagine someone slightly pulling at one vertex of these cells: the neighboring cell would expand and the cell I pull toward (or push toward) would contract. Kind of like a web of springs(?). But I have no clue how to implement something like this:




  • What's the math model for this? Something with springs, where forces push/pull?

  • And if so, how do I map this model to the given texture? Keeping all the curves and what not...


(I'm also open to different ideas/answers on how to animate the given texture. Realism is not the point here, just some nice looking water like movements...)



I posted a libgdx example in this post: 2d water animation is jagged and not smooth (see answer about texture filtering)



Answer



A common way this is done is using an indirect texture lookup in the shader to distort the display texture:


Animated gif showing water animation


Here I'm using a texture with some low-frequency colour noise (tiling smooth blobs of random colours), and scrolling it across the display geometry over time.



[image: the noise texture]


Instead of drawing the colours from this texture, I instead take the red & green channels and subtract 0.5f to turn them into a pseudorandom 2D vector that changes smoothly over time & space.


I can then add a small multiple of this vector to my UV coordinates, before sampling from the main water texture. This shifts the part of the texture we're reading & displaying, warping it around.


By averaging two samples from this noise, scrolling in opposite directions, we can hide the direction of movement so it just looks like aimless sloshing.


In Unity the shader would look like this - it should be simple enough to translate to the shader language of your choice:


fixed4 frag (v2f i) : SV_Target
{
    float2 waveUV = i.uv * _NoiseScale;
    float2 travel = _NoiseScrollVelocity * _Time.x;

    float2 uv = i.uv;
    uv += _Distortion * (tex2D(_Noise, waveUV + travel).rg - 0.5f);
    waveUV += 0.2f; // Force an offset between the two samples.
    uv += _Distortion * (tex2D(_Noise, waveUV - travel).rg - 0.5f);

    // Sample the main texture from the distorted UV coordinates.
    fixed4 col = tex2D(_MainTex, uv);

    return col;
}

jobs - Essential skills to be a games programmer



I recently posted a question about the kinds of things to expect in game programming tests for games companies, as I found myself unprepared for some of the topics. There are two things I can say about myself, and I think these probably apply to a lot of people: 1. I went and did a course at university that, although it got me interested in programming, was poorly taught and didn't prepare me to get any sort of decent job; and 2. I'm still completely determined to get into the games industry.


I'm currently working at a little start-up, but I know that if I want to make the games I like playing I have to keep cramming!


My question is this: Can some of you professionals / graduates from better courses enlighten me as to the things I need to learn to become a really solid candidate for programming jobs?



I'm talking about things like Big O notation, calculating complex recursion, linked lists, matrix/quaternion maths, etc.


The first in that list I was not taught, and the second I didn't even know was something I might ever be expected to do. A list of all these and more, plus a good place to start learning about them would be extremely useful, I think to others as well as myself.


(re-tag and re-word this question as appropriate)



Answer



Alright, I actually just graduated with a Bachelor's degree in Computer Science this December.


I'm going to list what I feel were the important classes and the textbooks we used in them. Other than these and some software engineering courses, the rest of my classes were more focused on subfields like databases, AI, web development, and ethical hacking.


Data Structures


http://www.amazon.com/Objects-Abstraction-Data-Structures-Design/dp/0471467553


This class is essentially an introduction to different types of data structures along with their performance characteristics. The class I took was focused on C++, so it discussed STL data structures such as vectors (dynamic arrays), linked lists, trees, maps, sets, stacks, queues, and hash tables. It touches upon some algorithm analysis too, but really only Big O notation for worst-case performance scenarios.


Algorithm Analysis



http://www.amazon.com/Foundations-Algorithms-Fourth-Richard-Neapolitan/dp/0763782505/ref=sr_1_fkmr0_1?s=books&ie=UTF8&qid=1326899174&sr=1-1-fkmr0


Explores many different types of algorithms and how to measure their performance characteristics. Some types of algorithms include divide and conquer, dynamic programming, greedy approaches, backtracking, and branch and bound.


Distributed Computing


http://www.amazon.com/Distributed-Computing-Applications-M-L-Liu/dp/0201796449/ref=sr_1_1?s=books&ie=UTF8&qid=1326899724&sr=1-1


Discusses networking, interprocess communication, UDP and TCP sockets, and a bunch of distributed computing paradigms (Client-Server, P2P, distributed objects, etc)


Computer Architecture


http://www.amazon.com/Structured-Computer-Organization-Andrew-Tanenbaum/dp/0131485210/ref=sr_1_1?s=books&ie=UTF8&qid=1326899561&sr=1-1


Self-explanatory: explains how all the hardware works and collaborates.


Program Language Concepts


http://www.amazon.com/Concepts-Programming-Languages-Robert-Sebesta/dp/0136073476/ref=sr_1_1?s=books&ie=UTF8&qid=1326899395&sr=1-1



Covers different aspects and characteristics of different languages: syntax, semantics, variable scope and binding, data types, expressions and assignment statements, control structures, etc.


Operating Systems


http://www.amazon.com/Operating-System-Concepts-Abraham-Silberschatz/dp/0470128720/ref=sr_1_1?s=books&ie=UTF8&qid=1326898989&sr=1-1


Covers, from top to bottom, all the essential tasks and concepts behind an operating system. This includes handling processes and threads, scheduling and synchronizing processes, memory (virtual memory, paging, segmentation, etc.), file systems, and I/O systems.


Wednesday, March 29, 2017

algorithm - Spell casting - How to optimize damage per second


Imagine we have a wizard that knows a few spells. Each spell has 3 attributes: Damage, cool down time, and a cast time. Pretty standard RPG stuff.



Cooldown time: the amount of time (t) it takes before being able to cast that spell again. A spell goes on "cooldown" the moment it begins casting.


Cast time: the amount of time (t) it takes to use a spell. While the wizard is casting something another spell cannot be cast and it cannot be canceled.


The question is: How would you maximize damage given different sets of spells?


It is easy to calculate the highest damage per cast time. But what about situations where it is better to wait than to get "stuck" casting a low-damage spell when a much higher one is available?


For example,




  1. Fireball: 3000 damage, 3 second cast time, 6 second cooldown.





  2. Frostbolt: 20 damage, 4 second cast time, 4 second cooldown.




  3. Fireblast: 3 damage, 3 second cast time, 3 second cooldown.




In this case your damage per second is higher if you choose the lower-DPCT spell (Fireblast) instead of the Frostbolt. So we must consider the consequences of choosing a spell.


The following example shows cases of "over casting" and "waiting".



Answer





All AI is search!



When you get into the guts of AI it's amazing how much of it is really search.



  • state: the remaining cooldown of all available spells.

  • fitness: total damage done

  • cost: total time taken

  • branches: any known spell. If a spell is still on cooldown, just add the remaining cooldown to its cast time.

  • goal: total health of the target. The goal has to be a finite amount of damage, so in the case of an unknown target, pick the largest possible health.
    Alternatively, the goal could be to spend less than 50 seconds, and the search would find the maximal damage that could be done in 50 seconds.



Plug these parameters into a Uniform Cost Search (UCS) and presto: a guaranteed-optimal battle plan. Even better, if you can come up with a heuristic, search with A* or IDA* and you'll get the same answer much faster.


One more advantage of UCS is that it can find the optimal cast order for much more complicated situations than the one you provided, which has only 3 variables. Some other aspects that could easily be added:



  • damage over time

  • refresh spell to reduce cooldown of other spells

  • haste spell causing other spells to cast faster.

  • damage booster causing other spells to do more damage.


UCS is not omnipotent. It cannot model the benefits of protection spells. For that you'll have to upgrade to alpha-beta search or minimax.

Also, it doesn't handle area-of-effect damage and group fights very well. UCS can be tweaked to give reasonable solutions in these situations, but it is not guaranteed to find the optimal solution.
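The bullet-point formulation above can be sketched directly as code. Below is a hypothetical Python sketch of the UCS (the spell numbers come from the question's example; the function and variable names are my own illustration):

```python
import heapq
from itertools import count

# Hypothetical spell data mirroring the question: (name, damage, cast time, cooldown).
SPELLS = [
    ("Fireball", 3000, 3.0, 6.0),
    ("Frostbolt", 20, 4.0, 4.0),
    ("Fireblast", 3, 3.0, 3.0),
]

def fastest_kill(target_hp, spells=SPELLS):
    """Uniform cost search for the minimum time to deal target_hp damage.

    State  = (damage still needed, remaining cooldown of each spell)
    Cost   = elapsed time
    Branch = cast any spell, first waiting out its cooldown if necessary
             (a cooldown starts the moment casting begins)."""
    tie = count()  # tie-breaker so the heap never has to compare states
    frontier = [(0.0, next(tie), target_hp, tuple(0.0 for _ in spells), [])]
    best = {}
    while frontier:
        t, _, need, cds, plan = heapq.heappop(frontier)
        if need <= 0:
            return t, plan  # first goal popped is optimal (UCS property)
        if best.get((need, cds), float("inf")) <= t:
            continue
        best[(need, cds)] = t
        for i, (name, dmg, cast, cd) in enumerate(spells):
            dt = max(0.0, cds[i]) + cast  # wait until ready, then cast
            new_cds = tuple(
                max(0.0, cd - cast) if j == i else max(0.0, c - dt)
                for j, c in enumerate(cds)
            )
            heapq.heappush(
                frontier, (t + dt, next(tie), need - dmg, new_cds, plan + [name])
            )
    return None
```

With the example spells, a 3000 HP target dies to a single Fireball in 3 seconds, while a 6000 HP target requires two Fireballs and 9 seconds total, exactly the kind of cooldown-aware plan the answer describes.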


idioms - In ‘have to’, does ‘to’ have the obligation meaning?


When I came upon this sentence: I have to do something, I thought have to acted like a phrase, with the meaning of having an obligation, and that have to was an auxiliary verb like must. But after forming its question and negative sentences: do I have to do something?, I don't have to do something, something seemed wrong, I thought, and I looked it up in CGEL. It says have is not an auxiliary but a catenative verb that takes a non-finite complement.


From all this I get the question: which one, have or to, carries the meaning of obligation? According to OLD (to as an infinitive marker, #7), to has the obligation meaning when it goes along with the verb be. Then is it still valid that in have to, to has the obligation meaning?



Answer



Have to [VERB] is an idiom, a collocation whose meaning cannot be derived from the meanings of the components. It is the entire have to [VERB] collocation which has the meaning, not any one component.


Typical characteristics of an idiom are described in SIL's glossary





  • Its individual components “can often be inflected in the same way individual words in a phrase can be inflected. This inflection usually follows the same pattern of inflection as the idiom's literal counterpart.”


    Thus, although the collocation has the same meaning as the modal auxiliary must, have plays the syntactic role of a lexical verb. Like other lexical verbs it has the finite forms and non-finite forms (an infinitive and participles) which modals lack. Consequently, it is inflected for person and number as well as tense, it can enter into verb constructions such as perfects and progressives, it can be employed in non-finite phrases and clauses, and (to address the specific issue you raise) it can act as the complement of other verbs, including catenative/auxiliary verbs like do. This greater flexibility is no doubt part of the reason why it has gone a long way toward replacing must.




  • It “behaves as a single semantic unit.” Specifically,




    • “It tends to have some measure of internal cohesion such that it can often be replaced by a literal counterpart that is made up of a single word”—viz., must.





    • “It resists interruption by other words whether they are semantically compatible or not.” For instance, although we accept an adverb between have to and the complementary [VERB], the connection between have and to is so strong that it is never interrupted and in fact has unique pronunciations, usually spelled hafta and hasta.



    • “It resists reordering of its component parts.” For instance, we don’t break up have to in cleft constructions:

      okI have to go, but not
      It is to go I have or
      What I have is to go.








  • It “has a non-productive syntactic structure. Only single particular lexemes can collocate in an idiomatic construction. Substituting other words from the same generic lexical relation set will destroy the idiomatic meaning of the expression.” Thus, we cannot substitute possess or obtain for have and keep a similar meaning; even with get, which we can substitute, there is no trace of the have to meaning:



    I possess to go.
    I obtain to go.
    okI get to go.



    But it must be acknowledged that I have got to go, in which have got means have, does mean the same thing as I have to go.


    Still, 4½ out of 5 is a pretty high score.







Historically, to be sure, the idiom arose out of a construction whose meaning can be derived from its components, and doubtless there was a time when its use was sufficiently narrow that its meaning was not idiomatic. But that was long ago.


raytracing - Ray tracing - draw polygon (square/bounded plane)


I'm working on my own ray tracer, an iPad app for a school project. This is the result with soft shadows, antialiasing, and purely reflective and purely transparent objects:


[image: rendered scene]


Now I want to change the skybox, implemented with the method contained here http://www.ics.uci.edu/~gopi/CS211B/RayTracing%20tutorial.pdf, to a real cube that wraps the whole scene. In this way I can display the soft shadows projected on the floor. I read a lot of documentation about ray tracing polygons and I understood how to check if the ray intersects the polygon's plane. Now my question is: if I want to draw a square, one for each side of the cube that will wrap the scene, how do I check whether the point of intersection is inside the square/polygon (so that I can shade it)? All the documentation seems vague and incomplete. I can't find a complete explanation, with some example and maybe some pseudo code, that really explains how to draw a square (bounded plane) in a ray tracer.


Thanks for helping me.



Answer




I finally managed it. I chose one of the simplest methods, which can be found here https://sites.google.com/site/justinscsstuff/object-intersection but is also covered in some of the documents linked by wondra in the comments above.


This method works for convex polygons (for other kinds, the odd/even rule, winding-number rule, or another method must be used). The idea is simple: it's a generalization of the test used for triangles.


Check that the point is always on the left of each edge, by taking the dot product between the normal of the polygon and the cross product of the edge with a vector from the current vertex to the intersection point. Some optimizations could be useful (for example: precalculate the edge list instead of recalculating it every time). Here is my code in Objective-C.


-(NSMutableDictionary *)intersect:(Ray *)ray {

    // Check intersection of the ray with the polygon's plane.
    NSMutableDictionary *intersectionData = [self intersectWithPlaneOfPolygon:ray];

    if (intersectionData == nil) {
        return nil;
    }

    Point3D *intersectionPoint = [intersectionData objectForKey:@"point"];
    NSUInteger numberOfVertex = self.vertexList.count;

    for (int i = 0; i < numberOfVertex; i++) {

        Point3D *nextVertex = [self.vertexList objectAtIndex:((i + 1) % numberOfVertex)];
        Point3D *currentVertex = [self.vertexList objectAtIndex:i];

        Vector3D *edge = [nextVertex diff:currentVertex];
        Vector3D *vectorWithIntersection = [intersectionPoint diff:currentVertex];

        Vector3D *crossProduct = [edge cross:vectorWithIntersection];
        float dotProduct = [crossProduct dot:self.normal];

        if (dotProduct < 0) {
            // Point is outside the polygon.
            return nil;
        }
    }

    return intersectionData;
}
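For readers not using Objective-C, the same edge test can be sketched in plain Python. This is a hypothetical, dependency-free translation of the loop above, with vectors as plain 3-tuples:

```python
def point_in_convex_polygon(point, vertices, normal):
    """Same-side test: the intersection point must lie to the left of every
    edge when walking a counter-clockwise vertex list. All inputs are
    3-component tuples; the polygon is assumed planar and convex."""
    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    def dot(a, b):
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

    n = len(vertices)
    for i in range(n):
        edge = sub(vertices[(i + 1) % n], vertices[i])
        to_point = sub(point, vertices[i])
        # Negative means the point is on the right of this edge: outside.
        if dot(cross(edge, to_point), normal) < 0:
            return False
    return True
```

For a unit square in the z = 0 plane with normal (0, 0, 1), a point like (0.5, 0.5, 0) passes every edge test, while (2, 2, 0) fails on the second edge.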

Tuesday, March 28, 2017

time reference - Past or present tense in sentences regarding thoughts


I am confused in using tenses in sentences regarding thoughts:



I thought that it was interesting.



or



I thought that it is interesting.




Which sentence out of these two is correct and why?




How are the Unreal Development Kit (UDK) and Unreal Engine 4 (UE4) related?



I'm thinking of learning Unreal Engine 4, but it costs money, and I want to try and keep costs as low as possible while I'm learning. In contrast, the Unreal Development Kit is free.


How similar are the two? If I learn UDK first, how easily can I transition to UE4?



Answer



Yes, the UDK is related to UE4: the UDK is based on Unreal Engine 3, to which Unreal Engine 4 is the successor.


To the initial end user a number of things have changed. Unreal Engine 4 replaces UDK's Kismet Visual Scripting system with Blueprints. You can do practically everything with Blueprints and in some ways Blueprints can be considered a replacement for UnrealScript.


UnrealScript is also gone. Instead of it you would now use Blueprints, or C++. UnrealScript is object-oriented and had some similarities with C++ and Java syntax, so while a lot has changed, using C++ won't be too unfamiliar to someone who is well versed in UDK.


The interface has changed, things have moved around and whatnot. All of this will take some time, but thanks to the comprehensive documentation it shouldn't be too difficult to figure out what is what.


One of the biggest changes, is that with UE4 all of the engine's source code is made available to you. For most users this won't matter, but for those that it does matter to - this is a very good thing. To gain access to the Unreal Engine 3 source as a UDK user - you had to license UE3.


The running suggestion has been: if your game is currently near completion in UDK, then stick with UDK; otherwise it's worth checking out UE4. UDK projects will not open in UE4, and you'll have quite a bit of work porting things over (as the scripting system and visual scripting have both been replaced). That said, Epic does provide a handy Transition Guide for people leaving UE3 (and UDK) for UE4.


What's the meaning of 'off' in this context?



"But we don't feel like leaving, do we, boys? We've eaten all our food and you still seem to have some."
Goyle reached toward the Chocolate Frogs next to Ron - Ron leapt forward, but before he'd so much as touched Goyle, Goyle let out a horrible yell.

Scabbers the rat was hanging off his finger, sharp little teeth sunk deep into Goyle's knuckle - Crabbe and Malfoy backed away as Goyle swung Scabbers round and round, howling, and when Scabbets finally flew off and hit the window, all three of them disappeared at once.
(Harry Potter and the Sorcerer's Stone)



Scabbers the rat is still attached to Goyle's hand. So why is the preposition off, which suggests coming off the finger, used?



Answer



When a person or item is "hanging off (something)" in the sense used, it is attached to that (something) at or near its upper end, while its lower end is loose and can easily flap or flail about. In the example sentence, Scabbers is attached to the finger via his teeth, while his body and legs are unsupported. "Hanging off" generally has the connotation that the hanging item is not firmly connected and may be subject to an undesired, potentially damaging fall.


"Hanging on (something)" is a much more generic description, with the implication that the item being hung is firmly or appropriately attached to the (something). Generally, if a fall is possible, it is either very unlikely under normal circumstances or will not be damaging. You can hang a picture on the wall, but you can't hang a picture off the wall because the bottom of the picture won't be flapping free (it'll be against the wall, not moving). You can hang your coat on the coathook, but you can't hang your coat off the coathook (because the coathook is designed to have coats hanging on it). Also, in contrast to "hanging from" (see below), "hanging on" does not imply that the hanging object is mostly below the supporting thing; more likely it will be beside it or around it.


"Hanging from (something)" is similar to "hanging off" in that it implies that the lower end of the hanging object is loose, and more specifically that a significant portion of the hanging object is below the lower edge of the supporting something; but it doesn't have the same sense of an impending fall. I would say, for example, that "a banner hanging from the top edge of a building" implies that it was placed there on purpose and properly secured, while "a banner hanging off the top edge of a building" implies that it was placed on the roof, and accidentally got blown partway over the edge.


"Hanging by (something)" is used to describe the element connecting you to your anchor point: Scabbers is hanging OFF a finger, BY his teeth; a chandelier hangs FROM the ceiling, BY a chain. This is often used in the idiomatic phrase "hanging by a thread" to describe something that is in a very precarious situation where a disastrous fall is likely at any moment.


These are good general guidelines, I think; but @StoneyB does give some examples that contradict them, which is pretty typical of English.



2d - State of the art in image compression?


I'm looking for good algorithms for compressing textures offline (i.e. decompressing them at install or load time; I know about using DXT/3DC to save runtime video memory, but these create awful visual artifacts while often being larger on disk than a lossless format like JPEG-LS, even after using LZMA or something for entropy coding). Since the data will only need to be read by my program, I don't need a portable format like JPEG, which means I'm open to more experimental algorithms. To be clear, I'm concerned with reducing the download size of my game; I will be using the textures uncompressed on the video card.


I'm looking for formats that support different features: for example, 10 bits-per-channel color (HDR) for higher color fidelity under dynamic lighting conditions, normal map formats (16 bits in X and Y), alpha channels, etc.


NOTE: My particular use case is for 2.5D backgrounds. In their uncompressed format, they are 10 bytes per pixel stored in 3 textures:


4 bytes - color  - D3DFMT_A2R10G10B10
4 bytes - normal - D3DFMT_G16R16F
2 bytes - depth - D3DFMT_R16F


I store these in 1024x1024 tiles, and since every area has a different background, these can get really large really fast. However, I think knowing more about image codecs in general will help with finding an efficient method to compress these.
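As a sanity check on "really large really fast", the per-tile arithmetic from the layout above can be worked out directly:

```python
# Uncompressed footprint of one 1024x1024 background tile, per the layout above.
BYTES_PER_PIXEL = 4 + 4 + 2        # color (A2R10G10B10) + normal (G16R16F) + depth (R16F)
TILE_PIXELS = 1024 * 1024
tile_bytes = TILE_PIXELS * BYTES_PER_PIXEL
tile_mib = tile_bytes / 2**20      # 10 MiB per tile, before any compression
```

So each unique background area costs 10 MiB on disk uncompressed, which is why a general-purpose compressor alone won't be enough for many areas.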



Answer




I'm looking for good algorithms for compressing textures offline



What is your purpose in compressing these images? There are generally two reasons to compress a texture:




  1. You want to make the texture smaller so that it takes up less memory on the GPU.





  2. You want to make the texture smaller so that it takes less time to load/requires less harddrive space, resulting in a smaller download.




The most important thing you can understand is this: you cannot satisfy both of these at once. You must pick one: smaller GPU memory, or smaller disk space/load time.


So let's look at both cases.


Case 1


Your possibilities here are the various "Block Compressed" types, using the D3D 10 language for them.


I'll assume you know what BC1-3 are, since they are formerly known as DXT1, DXT3, and DXT5, respectively. BC4 and BC5 are for 1 and 2-channel formats respectively. These two can be either unsigned or signed normalized.


BC5, 2-channel compressed, can be used for storing tangent-space normal maps, with the third coordinate (Z) reconstructed in your shader. It does a reasonably good job.



BC6H and BC7 are quite new. BC6H is a compressed, floating-point format. BC7 is a more accurate way to compress RGBA color data. You can only get BC6H and BC7 support from DX11-class hardware.


The OpenGL internal formats for these are as follows:



  1. BC1: GL_COMPRESSED_RGB_S3TC_DXT1_EXT and GL_COMPRESSED_RGBA_S3TC_DXT1_EXT

  2. BC2: GL_COMPRESSED_RGBA_S3TC_DXT3_EXT

  3. BC3: GL_COMPRESSED_RGBA_S3TC_DXT5_EXT

  4. BC4: GL_COMPRESSED_RED_RGTC1 and GL_COMPRESSED_SIGNED_RED_RGTC1

  5. BC5: GL_COMPRESSED_RG_RGTC2 and GL_COMPRESSED_SIGNED_RG_RGTC2

  6. BC6H: GL_COMPRESSED_RGB_BPTC_SIGNED_FLOAT_ARB and GL_COMPRESSED_RGB_BPTC_UNSIGNED_FLOAT_ARB

  7. BC7: GL_COMPRESSED_RGBA_BPTC_UNORM_ARB



The BC1-3 and BC7 formats can also be in the sRGB colorspace. OpenGL provides similar sRGB variants, such as GL_COMPRESSED_SRGB_ALPHA_BPTC_UNORM_ARB.


Obviously, none of these are lossless.


As I understand it, BC6H and BC7 are not widely supported in tools, though I seem to recall that NVIDIA's texture tools do support them.


Case 2


If you're talking about normal 8-bit-per-channel images, you already know your options. But for more exotic texture types, like floating-point, 10-bit-per-channel, and normal maps, you're out of luck.


The simple fact is that most general image compression software is intended for regular old images. Most people who deal with floating-point textures don't really care enough about loading speed or disk space to bother with compressing them. They're big, but they've accepted that. And it's really hard to losslessly compress floating-point values.


So really, just stick with known paths for regular images, and use zlib or 7z or some other general compression method for the others. That's generally about the best you can do.


Do note that any lossless image compression technique can be made to work on any pixel data, so long as that data can fit. So you can always put a normal map in a PNG, even if it is a 2-component normal map (the third component is 0). It may not compress well, but it will probably beat out pure-zip.
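As a concrete (hypothetical) illustration of the "zip the others" route, here is a zlib round-trip over raw float data; it is lossless, though the ratio on floating-point data is usually modest:

```python
# Sketch: using a general-purpose compressor (zlib) on raw floating-point
# texel data, as suggested above. The round-trip is lossless, but don't
# expect image-codec compression ratios on float data.
import struct
import zlib

# A small synthetic "height map" of float32 values (stand-in for a real texture).
texels = [0.01 * i for i in range(4096)]
raw = struct.pack(f"<{len(texels)}f", *texels)

compressed = zlib.compress(raw, level=9)
restored = zlib.decompress(compressed)

assert restored == raw  # lossless round-trip
print(len(raw), len(compressed))
```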






4 bytes - color - D3DFMT_A2R10G10B10



If you're storing a color, a normal, and a depth, then I can only assume that you are doing some form of deferred rendering-type thing here. That is, the color is the diffuse reflectance, not the light radiating from the surface.


Given that, 10-bit colors are pretty much overkill. You don't need that level of color accuracy for a diffuse reflectance value; you can get the same level of accuracy from an sRGB color value. HDR is all about the lighting; you can do HDR just fine with lower color-depth.


Once you're dealing with regular sRGB colors, you can start using standard image compression techniques.
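To make the "same accuracy from an sRGB value" point concrete, here is a sketch of the standard sRGB transfer function (IEC 61966-2-1); the encoding spends more code values on dark tones, where the eye is most sensitive, which is why 8-bit sRGB competes with 10-bit linear for diffuse color:

```python
# Sketch of the sRGB transfer function (IEC 61966-2-1), illustrating why an
# 8-bit sRGB value holds diffuse color well: dark tones get more code values.

def linear_to_srgb(l):
    return 12.92 * l if l <= 0.0031308 else 1.055 * l ** (1 / 2.4) - 0.055

def srgb_to_linear(s):
    return s / 12.92 if s <= 0.04045 else ((s + 0.055) / 1.055) ** 2.4

# Round-trip a dark-ish linear value through an 8-bit sRGB code value.
linear = 0.2
code = round(linear_to_srgb(linear) * 255)   # the stored byte
recovered = srgb_to_linear(code / 255)
assert abs(recovered - linear) < 0.005       # small error even at 8 bits
```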



4 bytes - normal - D3DFMT_G16R16F



There's no reason you can't take that down to G8B8 using signed-normalised values. Games do this all the time, and normals are not sensitive enough to need 16 bits worth of accuracy. And they're normals, so even using floating-point is way overkill.



Whether you can live with the artifacts produced by feeding it through BC5 compression is up to you. But you can at least go down from 32-bpp to 16-bpp with no detectable loss of fidelity.
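A sketch of the 16-bpp route (my own illustration): store X and Y as signed-normalized 8-bit values and rebuild Z, just as a shader would for a tangent-space normal map, where Z is always non-negative:

```python
# Sketch: storing a tangent-space normal in two signed-normalized 8-bit
# channels and reconstructing Z, as a shader would (assumes Z >= 0, which
# holds for tangent-space normals).
import math

def encode_xy(nx, ny):
    # Map [-1, 1] to a signed 8-bit integer in [-127, 127].
    return round(nx * 127), round(ny * 127)

def decode_with_z(ix, iy):
    x, y = ix / 127.0, iy / 127.0
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))  # reconstruct the third component
    return x, y, z

# A unit normal tilted away from straight-up.
n = (0.36, 0.48, 0.8)
ix, iy = encode_xy(n[0], n[1])
rx, ry, rz = decode_with_z(ix, iy)
assert abs(rx - n[0]) < 0.01 and abs(ry - n[1]) < 0.01 and abs(rz - n[2]) < 0.01
```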


Ultimately, everything is a tradeoff. You need to decide how much image quality you are willing to give up to get your game's data size under control. And since it's a tradeoff, you need to run actual tests to see where you are willing to draw that line.


indefinite article - Is it an obligatory or a obligatory


I was wondering if "An" should be used with obligatory, or "A". I understand that since obligatory begins with a vowel I should use "An", but I see people using "A" more frequently than "An".


To me "A obligatory like" sounds right, as in "a Facebook like".


Also, is "and a obligatory comment" correct, or is it "and an obligatory comment"?



Answer



To me only "an obligatory" sounds right. "a obligatory" is not only more difficult to pronounce but also sounds pretty odd (to me at least). Google ngram seems to agree with me:



Google Ngram


Monday, March 27, 2017

sentence construction - "Playing games (by myself / myself / alone)" - Grammar and nuance?


Q1. Here, I think all of them are right to use, but I want to know if there's any difference in nuance.



Playing computer games (by myself / myself / alone) can be fun for a certain amount of time, but it usually becomes boring.



Q2. I'd like to know if 'The fact' can be used as a subject here, and I know I can use 'alone' here, but I want to know if the others are possible to mean the same thing. And which one makes the most sense: after that / after a while / at some point?




The fact that I have a girl friend can be great for a certain amount of time, but (after that / after a while / at some point) I start to think I wish I was (alone / by myself / myself).





meaning in context - What is the proper use of the present progressive form, especially of "to have"?


When I read @ctype.h's question 'Is "I am having a code" grammatically correct?', I thought that @Mark Beadles brought up a good point:



This is two questions, though you may not realize it. The first is the proper use of "I am having a NNN", the present progressive of have.
...
The present progressive and the indefinite article are two of the more troublesome aspects of learning English, especially coming from languages that have neither.




As I (a native English speaker) thought about this, I realized I couldn't properly explain the present progressive to an ELL, even in the seemingly straightforward example @ctype.h gave:



I’m having a code which (does such and such, followed by a fragment of code)



So, really, what is the proper use of the present progressive form, in terms an ELL could understand?



Answer



I'm not a native speaker, and I frequently need to explain the usage of the present progressive form to other non-natives.


What I normally tell them is that they need to use this form when referring to an action which is taking place at or around the time of speaking, whereas they should use the present simple form in the case of habitual situations. I stress the idea of action, adding that there are many verbs which are considered stative and which should not be used in progressive forms.


More specifically, coming to the verb have, I tell them that it can be used in the progressive form when it isn't an auxiliary verb and when its meaning is not "possess".


So, for example, they can say




I'm having a cup of tea



meaning that they are drinking it at the time of speaking, or



I'm having a wonderful time



to indicate they are enjoying themselves, but it is wrong to say



I'm having a car




because in this context the verb "to have" means "to own" or "to possess". Normally this explanation works.


Addition: in my answer, I haven't considered the progressive form with a future meaning, where the problem of making a distinction between this and other future forms is as relevant as any.


word request - What's the opposite of "vegetarian"?


If someone hates salad, and mostly eats meat, they are basically the opposite of vegetarians. What do you call those people in English? I guess there is no such thing as a meatarian.




word usage - Very confused- "go home" or "come home"?


Ok, this website says "Come is used to show movement toward the speaker or the person being spoken to"


& "Go is used to show movement away from the speaker or the person being spoken to"


Let's say Tom is in his office & his mom is at home.


It's 5 PM & Tom says to his peers in his office "I go home now". The movement is away from the speaker "Tom" & the listeners "his peers".


Now, we can have a conversation when his mom calls him:


Conversation 1:


Mom: Can you come home soon? (we use "come" because the movement is toward the speaker "Mom")



Tom: I come home soon. (we use "come" because the movement is toward the listener "Mom")


Conversation 2:


Mom: Can you go home soon? (we use "go" because the movement is away from the listener "Tom")


Tom: I go home soon. (we use "go" because the movement is away from the speaker "Tom")


You can say that Conversation 2 is wrong, but I did apply the above guideline when creating it.


So the question is whether the above guideline has some shortcomings.



Answer



Like so often, someone has created a "rule" to make things easier, but they only succeeded in making things more complicated.


One could adapt the "rule" to "when the distance between the speaker and listener will get smaller, use come. Otherwise use go".


In general, you go away and you come towards something.



Tom tells his colleagues he's going home, he is leaving the office.


In the conversation between his mom and Tom, assuming mom's at home, Tom's movement towards home means they will get closer, so he's coming towards her, coming home. It doesn't matter who starts the conversation.


usage - I'm confused with this 'otherwise'



In linguistics, an adjunct is an optional, or structurally dispensable, part of a sentence, clause, or phrase that, if removed or discarded, will not otherwise affect the remainder of the sentence. Example: In the sentence John helped Bill in Central Park, the phrase in Central Park is an adjunct. (Wikipedia | Adjunct (grammar))



Is it proper?

Thank you.



Answer



Could you please tell me what makes you think that it's not proper? I think it's as proper as can be. You can understand this passage the following way:



an adjunct is an optional part of a sentence, such that removing or discarding it will not, in any way, affect the remainder of the sentence



In other words, removing an adjunct from a sentence has no effect on the rest of the sentence whatsoever.


cocos2d iphone - cocos2dx: RunningScene != Scene You just Replace


I have this code for cocos2d-x 3.x:


void MainMenu::StartGame(cocos2d::Ref* pSender)
{
auto director = Director::getInstance();
auto newScene = Scene::create();
director->replaceScene(newScene); //run
Scene *maybeNewScene = director->getRunningScene();

CCASSERT ( maybeNewScene == newScene , "...") ; //Assert Fail

auto hudLayer= HUD::create();
hudLayer->setPosition(Vec2::ZERO);
newScene->addChild(hudLayer, Z_NORMAL);
}

But when I get the running scene via director->getRunningScene(), it gives me the scene which was running before calling director->replaceScene() (the scene which is going to be destroyed soon).


When I checked the replaceScene() function, I saw that it places newScene in a variable called _nextScene but does not assign it to _runningScene.


Question: How can I access newScene just a few lines further on? (for example, in HUD::init())




Answer



Quoted from @mannewalis



The reason your scene is not the running scene is that it only gets set on the next frame, or even later if there is a transition running.


You could add your HUD creation code in the new scene's onEnter method; I believe runningScene will be set by then, but possibly not if there is a transition running, in which case you should add your HUD creation code to the onEnterTransitionDidFinish method. You can even use a lambda to do it by calling setonEnterTransitionDidFinishCallback.



Sunday, March 26, 2017

c++ - OpenWorld SceneGraph management and optimization



I have a SceneGraph class which for now is just a simple list implementation, and the only optimization I've planned so far is a check is something like this:


//GetDistance returns the distance between 2 objects
float maxViewDistance = 10.0f;

for(int i = 0; i < sceneList.count; i++)
{
if(GetDistance(sceneList[i], playerObject) < maxViewDistance)
{
Renderer.Draw(sceneList[i]);

}
}

I plan on filling the SceneList when loading the game from a file which has a bunch of objectIDs and their positions, and then loading each object's data (like .FBX, .BMP) somewhere (in another file). But since I don't intend to have loading screens (except when the game starts), wouldn't it become too much to iterate when drawing? For example, if I have a sceneList with a million objects for the entire game, even if it's just checking an if for each object, wouldn't that hurt performance?


I was thinking of making a sublist that gets populated only with the objects I'm currently drawing (those closer than the maxDistance I set), but then I'd need to update this subList every time the player takes a step, and I'd need to check all objects in my entire objectList to see which are going to be drawn next, so I don't think it'll help anyhow.


Also, for physics, I was also iterating through my sceneList and forcing every object to obey my physics rules (only gravity for now), but if I add collision checks, what's the best way to do it?


//Inside my game update function
for(int i = 0; i < sceneList.count; i++)
{
UpdatePhysics(sceneList[i]);

}

[...]

void UpdatePhysics(Object* object)
{
//I'd need to check for collisions against **all** objects inside my sceneList again? Isn't there a better way to do this?
}

How do games such as WoW (an open world with no loading screens between large numbers of zones) do it? And what about physics, as in Deus Ex, where any object you throw in the air is affected by gravity and collides with whatever is in its path?





tense - Simple Past vs. Present Perfect


Sometimes I feel difficulty telling the difference between simple past and present perfect.


Given a picture like this:


A man fell off his bike.



which one is correct between these two? Or in what sense are they different?


A. He fell off his motorcycle.


B. He has fallen off his motorcycle.



Answer



Simple past: An action or event happened in the past.


Present perfect: An action or event has happened in the past and it might happen again in the future.


A. He fell off his motorcycle.
This just means "he just fell off his motorbike." Example: Mike fell off his motorbike. Do you want to see him at the hospital?


B. He has fallen off his motorcycle.
This means that he fell off the motorbike (yesterday) and there is a chance that it will happen again in the future. Example: Mike has fallen off the motorbike again. This is the third time in a month.



grammar - Is it grammatically correct to use two past continuous tenses in a single sentence?


This question just suddenly somehow popped up in my mind.


Usually what I encountered in a sentence is in the pattern "past tense + past continuous"


E.g.:



I was doing homework when my mum came back.



I would like to know whether it is grammatically okay to use two past continuous tenses in one sentence?


E.g.:




I was laughing so hard when I was watching the video.





Saturday, March 25, 2017

ambiguity - How does the "Dalai Lama walks into a pizza shop..." joke work?


On YouTube, there's that famous joke the Dalai Lama didn't understand — and neither did I. It even made headlines in my part of the world, and on some of the sites I frequent, yet nobody ever bothered to explain it. I am at a loss. I suppose pretty much every non-native speaker will have trouble getting it.




The Dalai Lama walks into a pizza shop and says "can you make me one with everything?"



Is this some sort of pun? Double-entendre? A top-voted comment on YouTube says, "The joke is based on ambiguities of an expression, not the ideal joke to crack with a foreigner." Well, duh. Thanks for nothing. I looked up every single word of it in several dictionaries, including can, shop, one, make, with, walk, and each of these has a multitude of meanings, and I have no idea how they work together to create something funny.



Answer



This is indeed a pun.


To make someone something can mean "to create something for someone", as in, I made her a sandwich. But it can also mean "to change someone into some thing or state", as in, I made her angry; Zeus made her (into) a cow.


To be one with something is a spiritual expression meaning...something spiritual. When people say they are one with the universe, they mean they experience some sort of supernatural bond with the entire universe. Don't ask me how it works. Here everything is equivalent to the universe. This is known as nondualism. The Dalai Lama is known for his spirituality.


But one can also stand for one pizza, as in can you make me one [pizza] with [all available toppings]: everything means "every topping/ingredient you have that you can put on a pizza".


xna - How to sync the actions in a mutiplayer game?



I connect the clients with UDP (it's a peer-to-peer connection on a multicast network) and the clients send their positions to each other every frame (on WP7 that means the default 30 FPS). This game is kind of a pong game, and my problem is this: whenever the opponent hits the ball, the angle will not be the same on both mobiles. I think it's because of the latency (1 pixel of difference can cause a different angle). So my question is: how can I sync the hitting event?



Answer



In a multiplayer game, every gameplay-relevant decision should be made by only one system. When multiple systems make a decision, like in your case the trajectory of the ball, and they disagree due to timing issues, the game gets out of sync.


When each client calculates the angle only after its own collisions and sends the new trajectory of the ball to the other, you will get the best results.
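As a hypothetical sketch (all names are mine, not from the answer): the client that owns the collision computes the new trajectory deterministically and sends the result, rather than letting both sides derive it independently:

```python
# Hypothetical sketch: only the client whose paddle was hit computes the new
# trajectory, then sends the resulting velocity verbatim to the peer.
import math

def bounce(ball_speed, hit_offset, paddle_half_height, max_angle_deg=60):
    """Deterministic bounce: the hit position on the paddle picks the angle."""
    t = max(-1.0, min(1.0, hit_offset / paddle_half_height))
    angle = math.radians(max_angle_deg) * t
    return ball_speed * math.cos(angle), ball_speed * math.sin(angle)

# The hitting client computes this once and sends (vx, vy) to the peer,
# so both simulations continue from identical numbers.
vx, vy = bounce(ball_speed=10.0, hit_offset=0.0, paddle_half_height=2.0)
assert (vx, vy) == (10.0, 0.0)  # center hit: straight back
```

Since the peer never recomputes the collision, a 1-pixel disagreement about where the ball hit can no longer produce two different angles.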


But note that this allows the players to cheat. They could manipulate their client to always tell the other client that they hit, and that the ball is now flying in an impossible-to-catch trajectory. The only way to fix that is to introduce a neutral referee in the form of a central server which receives the users' input, calculates all game mechanics, and sends the results to the clients.


By the way: you could save a lot of bandwidth by sending the client positions only when they have actually changed, and not on every single frame.


Unity: set y rotation messed up the object


I created 3 boxes, then aligned the three to form some sort of gate. I set the parent to an empty object so that I can easily move it, and I added a box collider inside the gate.



normal object


When I try to rotate it on the Y axis, the boxes deform


deformed object


I just want to rotate it on Y, not deform it. Here is the object


the object




Is this related to the transformations I applied before? Is this something that requires applying the transformation first, like in Blender?



Answer



When you nest objects in the Hierarchy, all the transformations applied to the parent get applied to its children.


That means that if a parent has a non-uniform scale applied, its child objects will be distorted non-uniformly - getting stretched or squashed more in one direction than another.



When you add a child to a parent, Unity (by default) automatically re-calculates its local transformations to compensate for the parent's, so the net transformation on it remains as close as possible to what you had before parenting it. This can mask the effect of the parent's transformation at first.


As you rotate the child object relative to the parent's scaled axes, you change which way it's getting squashed or stretched by the parent's non-uniform scale.


e.g. if I have a narrow wall perpendicular to a highly stretched axis of the parent, and gradually turn the wall until it's parallel to that axis, the wall will get longer and longer.
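The effect is easy to verify with a little 2D math (a toy illustration, not Unity API): under a parent scale of (2, 1), the same unit-length child measures differently depending on its rotation:

```python
# Numeric sketch of the effect described above: a parent scale of (2, 1)
# stretches a child differently depending on the child's rotation relative
# to the parent's axes.
import math

def parent_transform(point, scale=(2.0, 1.0)):
    return (point[0] * scale[0], point[1] * scale[1])

def rotate(point, degrees):
    r = math.radians(degrees)
    c, s = math.cos(r), math.sin(r)
    return (point[0] * c - point[1] * s, point[0] * s + point[1] * c)

tip = (1.0, 0.0)  # tip of a unit-length child object

# Child aligned with the stretched X axis: it ends up 2 units long.
aligned = parent_transform(rotate(tip, 0))
# Child rotated 90 degrees: the same object is now only 1 unit long.
perpendicular = parent_transform(rotate(tip, 90))

print(aligned, perpendicular)
```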


As a result, it's usually most predictable & controllable if you try to keep non-uniform scales only at the leaf level of your transform hierarchy (objects with no children of their own). You can use it to stretch your cubes into skinny beams and flat walls. Then when you need to combine those objects into larger constructs, put them inside container objects that have only uniform scaling applied (ideally 1, 1, 1), and keep your scales uniform at all parent levels.


procedural generation - What is the most appropriate path-finding solution for a very large proceduraly generated environment?


I have been reading quite a bit in order to make the following choice: which path-finding solution should one implement in a game where the world is procedurally generated and of really large dimensions?


Here is how I see the main solutions and their pros/cons:



1) grid-based path-finding - this is the only option that would not require any pre-processing, which fits well. However, as the world expands, memory used grows up to insane levels. This can be handled in terms of processing paths, through solutions such as the Block A* or Subgoal A* algorithms. However, the memory usage itself is a problem that is difficult to circumvent;


2) navmesh - this would be lovely to have, due to its precision, fast path calculation and low memory usage. However, it can take an obscene pre-processing time.


3) visibility graph - this option also needs high pre-processing time, although it can be lessened by the use of fast pre-processing algorithms. Then, path calculation is generally fast too. But memory usage can get even more insane than grid-based depending on the configuration of the procedural world.


So, what would be the best approach (others not present in this list are also welcome) for such a situation? Are there techniques or tricks that can be used to handle procedural infinite-like worlds?


Suggestions, ideas and references are all welcome.


EDIT:


Just to give more details, one should see the application I am talking about as a very, very large office level, where rooms are generated procedurally. The algorithm works like the following. First, rooms are placed. Next, walls. Then the doors, and later the furniture/obstacles that go in each room. So, the environment can get really huge and have lots of objects, since new rooms are generated once the player approaches the boundary of the already generated area. This means that there will be no large open areas without obstacles.



Answer



Given that the rooms are procedurally built, portals created and then populated, I have a couple of ideas.


A* works really well on navigation meshes, and works hierarchically as well. I would consider building a pathfinding system that works at two levels - first, the room-by-room level, and second, within each room, from portal to portal. I think you can do this during generation at an affordable rate. You only need to path from room to room once you enter it, so it's very affordable in terms of memory/CPU cost.



High level A* can be done by creating a graph of each portal and room - a room is the node, and the 'path' or edge is the portal to another room. The cost of traversal has some options - it can be from the centre point of the room to the centre point of the other room, for example. Or you might want to make specific edges from portal to portal with real distances, which is more useful, I suspect. This lets you do high level pathfinding from room A to room B. Doors can be opened and closed, enabling or disabling specific paths, which is nice for certain types of game. Because it's room/portal based it should be pretty easy and affordable to calculate - just distance calculations and graph bookkeeping. The great thing about this is it reduces the pathfinding memory costs dramatically in large environments since you are doing only the room-to-room finding.
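A minimal sketch of that high-level search (my own illustration; the room names and portal costs are made up). Rooms are nodes and portals are weighted edges, so a plain Dijkstra over the graph is enough:

```python
# Minimal sketch of the high-level, room-to-room search: rooms are nodes,
# portals are weighted edges; Dijkstra finds the cheapest room sequence.
import heapq

def room_path(graph, start, goal):
    """graph: {room: [(neighbor_room, portal_cost), ...]}"""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, room, path = heapq.heappop(frontier)
        if room == goal:
            return cost, path
        if room in seen:
            continue
        seen.add(room)
        for neighbor, edge_cost in graph.get(room, []):
            if neighbor not in seen:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return None  # no route (e.g. all doors closed)

offices = {
    "lobby":   [("hall", 4.0)],
    "hall":    [("lobby", 4.0), ("office", 3.0), ("storage", 6.0)],
    "office":  [("hall", 3.0), ("storage", 2.0)],
    "storage": [("hall", 6.0), ("office", 2.0)],
}
cost, path = room_path(offices, "lobby", "storage")
assert path == ["lobby", "hall", "office", "storage"] and cost == 9.0
```

Closing a door just means removing (or re-adding) the corresponding edge; the low-level navmesh inside each room is only consulted for the rooms on the returned path.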


The harder part will be the low level A*, because it should be a polygonal navigation mesh. If each room is square, you can start with a polygon. When you place obstacles, subtract the area occupied from the polygon, making holes in it. When it's all finished you'll want to tessellate it into triangles again, building up the graph. I don't think this is as slow as you think. The difficult part is performing the polygon hole cutting, which requires a good amount of bookkeeping, but it is well documented in half-edge structures and established computer graphics books. You can also perform this generation lazily, in the background, as you don't actually need the A* results of this level until someone is in the room - the high level takes care of basic path planning for you. Someone may never even enter the room in a run, because the high level A* never leads them there.


I know I have glossed over the low level navigation mesh generation, but I think it's one of those things you set your mind to and solve and then it's done. There are a bunch of libraries out there like CGAL (http://www.cgal.org) and others that can do this stuff, but really to get it going fast you might need to write it yourself so you only have the things you need.


Alternatively, you could make each room be a grid, and the obstacles fill up parts of the grid, and then do all the standard grid smoothing algorithms, but I like navmesh data as it is small and fast.


Hope that makes some sense.


c++ - Multiple buffering in OpenGL on Windows



  1. What is the most common way modern games perform triple buffering?

  2. What does SwapBuffers actually do in terms of OpenGL state?

  3. Is it possible to perform double and triple buffering independently of the window system (for example, by manipulating glDrawBuffer or by using FBOs/PBOs)? If so, any hints?

  4. Does 3. even make sense in terms of performance and flexibility?




Answer



This quote answers most of your questions:



You cannot control whether a driver does triple buffering. You could try to implement it yourself using a FBO. But if the driver is already doing triple buffering, your code will only turn it into quadruple buffering. Which is usually overkill.



http://www.opengl.org/wiki/Common_Mistakes#Triple_Buffering



What does SwapBuffers actually do in terms of OpenGL state?



From what I understand, it doesn't really do anything other than flushing GL and swapping the buffers.



Friday, March 24, 2017

career - Need guidelines for studying Game Development



I've completed my graduation in Computer Science and am currently working as a Software Engineer in a software company. I was wondering if I can build my career in game development. If so, what should my approach be? I have a few questions:



  1. Which universities should I apply to for a masters, preferably in Canada? Are scholarships available? How shall I prepare myself before applying to give myself an edge or advantage over others?

  2. I know Java, C#, PHP etc. I don't think these languages will be needed in game development. In that case, what languages shall I focus on from now on?

  3. How do I get some ideas about the IDEs/engines/platforms of game development? I'm not talking about flash/browser games.

  4. Please suggest anything you want, as I don't know much about this and am likely to miss the most important questions. Feel free to make this thread a starter guide for those interested in pursuing a career in game development. Post any relevant information.


EDIT: I can see a lot of people suggested building a small project/game. If so, please suggest how I should start developing a small game (maybe a clone of an existing small game, i.e. Pac-Man, a brick game, etc.) from start to end.




Answer



For more details and proper guidelines check this out:


http://www.sloperama.com/advice.html


Does the linear attenuation component in lighting models have a physical counterpart?


In OpenGL (and other systems) the distance attenuation factor for point lights is something like 1/(c+kd+sd^2), where d is the distance from the light and c, k and s are constants.



I understand the sd^2 component, which models the well-known, physically accurate "inverse square law" attenuation expected in reality.


I guess the constant c, usually one, is there to deal with very small values of d (and divide-by-zero defense perhaps?).


What role does the linear kd component have in the model? (By default k is zero in OpenGL.) When would you use other values for k? I know that this is called the "linear attenuation" component, but what behavior does it simulate in the lighting model? It doesn't seem to appear in any physical model of light that I'm aware of.


[EDIT]


It has been pointed out by David Gouveia that the linear factor might be used to help make the scene 'look' closer to what the developer/artist intended, or to better control the rate at which the light falls off. In which case my question becomes "does the linear attenuation factor have a physics counterpart or is it just used as a fudge factor to help control the quality of light in the scene?"



Answer



Light, from point-like sources, falls off with the square of the distance. That's physical reality.


Linear attenuation is often stated to appear superior. But this is only true when working in a non-linear colorspace. That is, if you don't have proper gamma correction active. The reason is quite simple.


If you're writing linear RGB values to a non-linear display without gamma correction, then your linear values will be mangled by the monitor's built-in gamma ramp. This effectively darkens the scene compared to what you actually intended.


Assuming a gamma of 2.2, your monitor will effectively raise all of the colors to the power of 2.2 when displaying them.



This is linear attenuation: 1/kd. This is linear attenuation with the monitor's gamma ramp applied: 1/(kd)^2.2. That's pretty close to a proper inverse-squared relationship.


But the actual inverse-square term 1/(sd^2) becomes 1/((s^2.2)(d^4.4)). This makes the light attenuation fall off much more sharply than expected.
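Plugging in numbers makes the claim easy to check (a quick sketch of my own, taking k = s = 1 for simplicity):

```python
# Quick numeric check of the claim above: linear attenuation viewed through a
# gamma-2.2 display behaves almost like physical inverse-square falloff,
# while inverse-square attenuation through the same display falls off as d^4.4.
d = 3.0  # some distance; k = s = 1 for simplicity

linear_on_screen = (1.0 / d) ** 2.2               # 1/(kd) after the monitor's gamma
inverse_square = 1.0 / d ** 2                     # the physically correct falloff
inverse_square_on_screen = (1.0 / d ** 2) ** 2.2  # = 1/d^4.4: far too sharp

print(linear_on_screen, inverse_square, inverse_square_on_screen)
```

At d = 3, the gamma-mangled linear term lands within a few percent of true inverse-square, while gamma-mangled inverse-square is an order of magnitude darker.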


In general, if you're using proper gamma correction (like rendering to an sRGB framebuffer), you shouldn't use linear attenuation. It won't look right. At all. And if you're not using gamma correction... what's wrong with you ;)


In any case, if you're trying to mimic reality, you want inverse-squared (and gamma correct). If you're not, then you can do whatever you need to for your scene.


rts - Efficient fog-of-war visibility searching


Every game I look into that uses fog of war tends to have the AI ignore fog of war completely. I'm starting to see why.


I have an RTS game I'm working on, with lots of units moving around. All units are on a grid, and I use a predefined search pattern to scan the grid points around the unit to find the nearest target. So far so good.



However, I have another unit type whose job it is to run around the map and collect scrap from destroyed units. As with combat units, I want it to look around and select the closest scrap it can find. However, unlike combat units, this search isn't limited to its own vision radius. It should be able to take advantage of the player's entire fog of war. If another unit or building has revealed some scrap, it should be available for the unit to go pick up.


While scanning the vision radius of a unit is simple, I'm having trouble thinking of an efficient way to determine which grid points to scan in this much larger case. Any grid points within view of a building will be fairly static, but things other units see will change every frame, as they are constantly moving.


The two options I see are to record per-player visibility flags within the grid points themselves, and then (somehow) update those flags each frame, or to just loop through all units and buildings and perform individual searches to build a list of visible scrap. The first would be difficult to keep updated without re-scanning the entire map, and the second would be very redundant if there were many units or buildings in a small area.


Is there another method I could use to build a list of visible scrap within a player's entire fog of war view, given that that view is constantly changing?




auxiliary verbs - Does anyone "has" or "have"


I have read a similar question here but that one talks about the usage of has/have with reference to "anyone". Here, I wish to ask a question of the form:



Does anyone has/have a black pen?



What is the correct form of verb which should be used here? I understand that for "anyone", it should be has, as in:



Has anyone got a black pen?



But my doubt here is because of the auxilliary "does" in the question. Will that cause any change to the choice of has/have?




Answer



When using auxiliary or helping verbs, the first verb is conjugated according to the subject, but the second part is fixed.


Take present progressive tense, for an example:



I am going to the park.


He is going to the park.


We are going to the park.



The basic construction here is {to be} + {-ing form of verb}. The {to be} is conjugated according to the subject, but the main verb is not - it'll always be "going" in this example.


This is the same with {to do} + {plain form of verb}, which is the emphatic form of a verb, and often used for negative and interrogative expressions.




I do go to the park from time to time.


He does go to the park from time to time.


Do you go to the park from time to time?


Does he go to the park from time to time?


Does anyone go to the park from time to time?



Anyone is singular, so the first verb is conjugated accordingly, but the verb that follows is not.


Thursday, March 23, 2017

mmo - How can I represent location in a massive world?


I've been thinking about No Man's Sky a lot recently and all the technical challenges they must face. For example, how on earth do you store a player's location in a world that is so enormous?



I assume x,y,z isn't feasible. I notice the advertised number of planets (18 quintillion) is exactly double the maximum signed integer you can store in 64 bits, if that is relevant.


From watching tech videos of him (Sean Murray) describing the architecture, he seems to say everything is generated by formula where the inputs are x and y. Obviously he's simplifying, but how might one accomplish this?




actionscript 3 - Most efficient 3d depth sorting for isometric 3d in AS3?


I am not using the built in 3d MovieClips, and I am storing the 3d location my way.


I have read a few different articles on sorting depths, but most of them seem inefficient.


I had a really efficient way to do it in AS2, but it was really hacky, and I am guessing there are more efficient ways that do not rely on possibly unreliable hacks.


What is the most efficient way to sort display depths using AS3 with Z depths I already have?



Answer



If you're talking a tile-based isometric game, you have a fixed number of different depths that are bounded between some known nearest and farthest depth. In that case, it's a perfect candidate for a pigeonhole sort, which has the best possible algorithmic complexity.


Just make an array where each index corresponds to a depth, and each element is a collection of entities at that depth. Sorting is just (in pseudo-code):



sort(entities)
    buckets = new Array(MaxDistance)

    for index in buckets
        buckets[index] = new Array
    end

    // distribute to buckets
    for entity in entities
        distance = calculateDistance(entity)
        buckets[distance].add(entity)
    end

    // flatten
    result = new Array
    for bucket in buckets
        for entity in bucket
            result.add(entity)
        end
    end
end

And that's from a completely unsorted collection. An even better option is to simply persist the buckets and keep the entity's bucket location updated when its depth changes.
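As a runnable sketch of the same idea (Python; `depth_of` is a hypothetical stand-in for `calculateDistance` above):

```python
def depth_sort(entities, max_depth, depth_of):
    """Pigeonhole sort: O(n + max_depth).

    `depth_of(e)` must return an integer depth in [0, max_depth).
    """
    # One bucket per integer depth
    buckets = [[] for _ in range(max_depth)]

    # Distribute to buckets
    for entity in entities:
        buckets[depth_of(entity)].append(entity)

    # Flatten back-to-front; order within a bucket is preserved (stable)
    return [entity for bucket in buckets for entity in bucket]
```

Since each bucket keeps insertion order, the sort is stable, which matters if entities at the same depth have a secondary draw order.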


the meaning of "modelling"



The goldsmith Cornelys has not been paid for the cradle he made for the king's last child, the one that never saw the light; he claims for twenty shillings disbursed to Hans for painting Adam and Eve on the cradle, and he is owed for white satin, gold tassels and fringes, and the silver for modelling the apples in the garden of Eden.


— Wolf Hall by Hilary Mantel



Does the "modelling" here mean "to give a three-dimensional appearance to, as by shading or highlighting" as in # 5 of The Free Dictionary?




phrase usage - The Times is a highly (respected or respectable) journal?


What is the correct usage?



The Times is a highly (respected or respectable) journal.



And if we changed the sentence to this, would you change your choice?




The Times is a highly (respected or respectable) journal all over the country.





How to draw a smooth circle in Android using OpenGL?


I am learning about OpenGL API on Android. I just drew a circle. Below is the code I used.


public class MyGLBall {

    private int points = 360;
    private float vertices[] = {0.0f, 0.0f, 0.0f};
    private FloatBuffer vertBuff;

    // centre of circle
    public MyGLBall() {
        vertices = new float[(points + 1) * 3];

        for (int i = 3; i < (points + 1) * 3; i += 3) {
            double rad = (i * 360 / points * 3) * (3.14 / 180);
            vertices[i] = (float) Math.cos(rad);
            vertices[i + 1] = (float) Math.sin(rad);
            vertices[i + 2] = 0;
        }
        ByteBuffer bBuff = ByteBuffer.allocateDirect(vertices.length * 4);
        bBuff.order(ByteOrder.nativeOrder());
        vertBuff = bBuff.asFloatBuffer();
        vertBuff.put(vertices);
        vertBuff.position(0);
    }

    public void draw(GL10 gl) {
        gl.glPushMatrix();
        gl.glTranslatef(0, 0, 0);
        // gl.glScalef(size, size, 1.0f);
        gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertBuff);
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDrawArrays(GL10.GL_TRIANGLE_FAN, 0, points / 2);
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glPopMatrix();
    }
}

It is actually taken directly from here. The circle looks pretty good. But now I want to make the boundary of the circle smooth. What changes do I need to make to the code? Or do I need to use some other technique for drawing the circle?



Thanks.



Answer



With polygon-based graphics, the only option you have to better approximate a circle is to subdivide further: 720 triangles will result in a smoother circle, 1440 will give you a smoother one still, then 2880...


A perfect circle, created using polygons, would require an infinite amount of infinitesimally small polygon sections (in other words, it just isn't possible, in theory). Practically, however, if you subdivide enough times, you may reach a point where the length of the polygon section is equal to or smaller in size than a pixel, meaning that, for purposes of your framebuffer, you have achieved a "perfect" circle.


The other option, of course, is to cheat: render a flat quadrilateral oriented perpendicular to the camera, and texture it with an image of a sphere. 2 triangles, 1 texture of sufficient resolution: smooth circle, and a hell of a lot fewer vertices for your GPU to worry about.
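As a rough worked example of the "polygon edge smaller than a pixel" threshold (Python; the function name and the 1-pixel chord criterion are my own illustration, not from the answer above):

```python
import math

def segments_for_pixel_smooth(radius_px):
    """Segments needed before a polygonal circle looks 'perfect'.

    A regular n-gon inscribed in a circle of radius r has edge (chord)
    length 2*r*sin(pi/n). Solving 2*r*sin(pi/n) <= 1 pixel for n gives
    the subdivision count at which edges shrink below one pixel.
    """
    return math.ceil(math.pi / math.asin(1.0 / (2.0 * radius_px)))
```

So a circle of radius 100 pixels needs a few hundred segments before further subdivision stops being visible, which is why the textured-quad cheat is usually the better deal.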


linear algebra - Calculate initial velocity for trajectory given duration, launch angle, and distance


I need my entity to go from point A to point B with a given launch angle over a given time.


I've been able to figure out the initial velocity without the constraint of the launch angle, but I can't figure out how to incorporate that into the equation.


After launch the entity is only affected by gravity, and point A and B are not necessarily at the same height.


Thanks!



Answer



Note: the situation you describe (angle, start, end, and time of flight) is overdetermined. For many (most) inputs, there will be no solution that simultaneously satisfies all of these constraints.


(e.g. if we take the start, angle, time, and just the horizontal coordinate of the endpoint, that already completely determines the launch speed - with no guarantee that this hits the necessary vertical coordinate of the endpoint!)


To address this I'll remove the time of flight constraint, so we're finding a ballistic trajectory from the start to the end point, with a given firing angle. The launch speed and duration before it hits the target will be left as unknowns to solve for.





I'll assume we're in 2D here. If 3D, you can reduce it to the 2D case by projecting onto the plane containing the start/end points and the gravity vector.


We'll subtract out the start position, so it becomes zero and disappears from our equations, and our target vector represents the offset from the start to the end of the trajectory, fired with an initial inclination of angle in radians.


Our initial velocity is then the speed of the projectile times a unit vector in the firing direction:


initialVelocity = speed * Vector2(cos(angle), sin(angle))


With this, the projectile's position at time t is given by:


positionAt(t) = initialVelocity * t + (gravity/2) * t*t


(where gravity is a downward-pointing vector, like (0, -9.8))


Taking just the horizontal, x component at the time of impact T:


target.x = initialVelocity.x * T
target.x = speed * cos(angle) * T

T = target.x/(speed * cos(angle))

Substituting this into the vertical, y component:


target.y = speed * sin(angle) * T + (gravity.y/2) * T^2
target.y = speed * sin(angle) * target.x / (speed * cos(angle))
           + (gravity.y/2) * (target.x / (speed * cos(angle)))^2
target.y = target.x * tan(angle) + (gravity.y/2) * (target.x^2 / (speed^2 * cos^2(angle)))
target.y - target.x * tan(angle) = (gravity.y * target.x^2) / (2 * speed^2 * cos^2(angle))
speed^2 = (gravity.y * target.x^2) / (2 * cos^2(angle) * (target.y - target.x * tan(angle)))


Finally, since a speed is positive by definition (direction coming from the unit vector above):


speed = sqrt((gravity.y * target.x^2) / (2 * cos^2(angle) * (target.y - target.x * tan(angle))))


Ta-dah! :D




A few things to note: this becomes undefined...




  • if angle is vertical: you can still hit something directly below you with any speed at all (gravity does all the work), or directly above you if you fire straight up with speed >= sqrt(-2 * gravity.y * target.y). If the target is off to the left or right at all, then we can't hit it with this angle.





  • if target.y >= target.x * tan(angle): the target is too high. The parabola kisses the line y = x * tan(angle) only at the moment it's fired - from then on gravity pulls it downward, so if the target is above that line then we'll never hit it even with infinite speed.
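Both the final formula and the "target too high" guard fit in a few lines. A sketch in Python (the function name `launch_speed` is illustrative; `target_x`/`target_y` are the offset from the launch point as above):

```python
import math

def launch_speed(target_x, target_y, angle, gravity_y=-9.8):
    """Speed needed to hit (target_x, target_y), relative to the launch
    point, when firing at `angle` radians above the horizontal.

    Returns None when the target lies on or above the initial tangent
    line y = x * tan(angle), i.e. when no speed can reach it.
    """
    denom = 2 * math.cos(angle) ** 2 * (target_y - target_x * math.tan(angle))
    if denom >= 0:
        return None  # target too high for this firing angle
    # gravity_y < 0 and denom < 0, so the ratio is non-negative
    return math.sqrt(gravity_y * target_x ** 2 / denom)
```

As a sanity check, a 45-degree shot on level ground has range v^2/|g|, so hitting a target 10 m away with g = -9.8 needs v^2 = 98, which this function reproduces.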






I've written a few previous answers on the subject of planning ballistic trajectories - you might also find those useful for reference:



Wednesday, March 22, 2017

xna - How do I make a jumping dolphin rotate realistically?


I want to program a dolphin that jumps and rotates like a real dolphin. Jumping is not the problem, but I don't know how to handle the rotation. At the moment, my dolphin rotates a little strangely, but I want it to rotate the way a real dolphin does.


How can I improve the rotation?


public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Texture2D image, water;
    float Gravity = 5.0F;
    float Acceleration = 20.0F;
    Vector2 Position = new Vector2(1200, 720);
    Vector2 Velocity;
    float rotation = 0;
    SpriteEffects flip;
    Vector2 Speed = new Vector2(0, 0);

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        graphics.PreferredBackBufferWidth = 1280;
        graphics.PreferredBackBufferHeight = 720;
    }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        image = Content.Load<Texture2D>("cartoondolphin");
        water = Content.Load<Texture2D>("background");
        flip = SpriteEffects.None;
    }

    protected override void Update(GameTime gameTime)
    {
        float VelocityX = 0f;
        float VelocityY = 0f;

        float time = (float)gameTime.ElapsedGameTime.TotalSeconds;
        KeyboardState kbState = Keyboard.GetState();
        if (kbState.IsKeyDown(Keys.Left))
        {
            rotation = 0;
            flip = SpriteEffects.None;
            VelocityX += -5f;
        }

        if (kbState.IsKeyDown(Keys.Right))
        {
            rotation = 0;
            flip = SpriteEffects.FlipHorizontally;
            VelocityX += 5f;
        }

        // jump if the dolphin is under water
        if (Position.Y >= 670)
        {
            if (kbState.IsKeyDown(Keys.A))
            {
                if (flip == SpriteEffects.None)
                {
                    rotation += 0.01f;
                    VelocityY += 40f;
                }
                else
                {
                    rotation -= 0.01f;
                    VelocityY += 40f;
                }
            }
        }
        else
        {
            if (flip == SpriteEffects.None)
            {
                rotation -= 0.01f;
                VelocityY += -10f;
            }
            else
            {
                rotation += 0.01f;
                VelocityY += -10f;
            }
        }

        float deltaY = 0;
        float deltaX = 0;

        deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
        deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
        deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;

        Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY);
        Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
        Velocity.X = 0;

        if (Position.Y + image.Height / 2 > graphics.PreferredBackBufferHeight)
            Position.Y = graphics.PreferredBackBufferHeight - image.Height / 2;

        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        spriteBatch.Begin();
        spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight - 100, graphics.PreferredBackBufferWidth, 100), Color.White);
        spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1);
        spriteBatch.End();

        base.Draw(gameTime);
    }
}



The code now works almost perfectly. I introduced the bool variable direction and made some tests with the angle. Jumping works now as it should. But I couldn't change two things:


1) At the beginning (you can see it in the video), the dolphin always looks up. I want it to look to the right side, if possible. I tried to change the rotation but it didn't work.


2) The second thing is a little bit strange. If I change the direction from left to right or right to left, the dolphin is rotated in the wrong direction for a short moment. How can I fix that? I made a new video: http://www.myvideo.de/watch/8881567


public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;
    Texture2D image, water;
    float Gravity = 5.0F;
    float Acceleration = 20.0F;
    Vector2 Position = new Vector2(1200, 720);
    Vector2 Velocity;
    float rotation = 0;
    SpriteEffects flip;
    Vector2 Speed = new Vector2(0, 0);

    bool direction = false;
    Vector2 prevPos;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
        graphics.PreferredBackBufferWidth = 1280;
        graphics.PreferredBackBufferHeight = 720;
    }

    protected override void Initialize()
    {
        base.Initialize();
    }

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        image = Content.Load<Texture2D>("cartoondolphin");
        water = Content.Load<Texture2D>("background");
        flip = SpriteEffects.None;
    }

    protected override void Update(GameTime gameTime)
    {
        float VelocityX = 0f;
        float VelocityY = 0f;

        float time = (float)gameTime.ElapsedGameTime.TotalSeconds;

        KeyboardState kbState = Keyboard.GetState();
        if (kbState.IsKeyDown(Keys.Left))
        {
            flip = SpriteEffects.FlipHorizontally;
            VelocityX += -5f;
            direction = false;
        }

        if (kbState.IsKeyDown(Keys.Right))
        {
            flip = SpriteEffects.None;
            VelocityX += 5f;
            direction = true;
        }

        if (direction == false)
        {
            rotation = -((float)Math.Atan2(Position.X - prevPos.X, Position.Y - prevPos.Y) + MathHelper.PiOver2);
            prevPos = Position;
        }

        if (direction == true)
        {
            rotation = -((float)Math.Atan2(Position.X - prevPos.X, Position.Y - prevPos.Y) + MathHelper.Pi + MathHelper.PiOver2);
            prevPos = Position;
        }

        // jump if the dolphin is under water
        if (Position.Y >= 670)
        {
            if (kbState.IsKeyDown(Keys.A))
            {
                if (flip == SpriteEffects.None)
                {
                    VelocityY += 40f;
                }
                else
                {
                    VelocityY += 40f;
                }
            }
        }
        else
        {
            if (flip == SpriteEffects.None)
            {
                VelocityY += -10f;
            }
            else
            {
                VelocityY += -10f;
            }
        }

        float deltaY = 0;
        float deltaX = 0;

        deltaY = Gravity * (float)gameTime.ElapsedGameTime.TotalSeconds;
        deltaX += VelocityX * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;
        deltaY += -VelocityY * (float)gameTime.ElapsedGameTime.TotalSeconds * Acceleration;

        Speed = new Vector2(Speed.X + deltaX, Speed.Y + deltaY);
        Position += Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;
        Velocity.X = 0;

        if (Position.Y + image.Height / 2 > graphics.PreferredBackBufferHeight)
            Position.Y = graphics.PreferredBackBufferHeight - image.Height / 2;

        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        spriteBatch.Begin();
        spriteBatch.Draw(water, new Rectangle(0, graphics.PreferredBackBufferHeight - 100, graphics.PreferredBackBufferWidth, 100), Color.White);
        spriteBatch.Draw(image, Position, null, Color.White, rotation, new Vector2(image.Width / 2, image.Height / 2), 1, flip, 1);
        spriteBatch.End();

        base.Draw(gameTime);
    }
}

Answer




A jumping dolphin's rotation is related to the direction it's moving.


[figure: dolphin trajectories]


The blue is water.
The dots are dolphin positions at each time tick.

The arrows are the direction the dolphin is rotated to face.


That bottom one is a radioactive dolphin with gravity powers.



You could get the same effect with something like this:


function dolphin.onPositionChange()
    dolphin.sprite.rotation = directionFromTo(dolphin.prevPos, dolphin.pos)
    dolphin.prevPos = dolphin.pos
end

That's only illustrative pseudocode of course. Here's what it does in English:




Every time the dolphin's position changes
The angle from its previous position to new position becomes its rotation
The current position is saved as the old one, for use again next step

Finding the direction from point A to point B is a little bit of geometry. XNA might have a function that does this for you. If it does, use that!



If it doesn't (or if you just want to understand what's going on), here's the geometry for you:


[figure: geometry calculation of angle given opposite and adjacent]


Love2D (my favourite game framework) doesn't have a function to calculate the direction from point to point. Here's how I made one:



-- Vector direction
-- Angle in radians, clockwise from up i.e. straight up is 0, right is
-- pi/2, down is pi, etc.
function directionFromTo(a, b)
    return math.atan2( (b.x - a.x), (b.y - a.y) )
end

-- Test; should print 135
a = {x = 0, y = 0}
b = {x = 1, y = -1}

print( directionFromTo(a, b) / math.pi * 180 ) -- (division to convert to degrees)
