Monday, February 29, 2016

grammaticality - "We plan" versus "we are planning"


Is there a semantic difference between the following two sentences?



In the future, we are planning to migrate our tool to the Z3 solver.




In the future, we plan to migrate our tool to the Z3 solver.




Answer




There is no real semantic difference — their meanings are the same — but there is a slight difference in use.


1) A fixed arrangement in the near future is very often expressed by the present continuous tense:



In the future, we are planning to migrate our tool to the Z3 solver.



The time expression in the future tells us when the action will occur. You can substitute any time adverbial, such as tomorrow, next week, in January, etc. We often use the present continuous tense when we are talking about appointments, dates, plans and programmes. Think of it as an event which we have "pencilled in" in our diary or calendar. Note that the time must be mentioned; otherwise the reader or listener may confuse the future meaning with the present.


2) The simple present is also used for a definite future arrangement.



In the future, we plan to migrate our tool to the Z3 solver.




However, it is more impersonal and formal than the continuous. Compare the following sentences:



I'm leaving tonight = implies it is my decision, I have made the arrangements myself.



and



I leave tonight = could imply that the decision was not made by me but by my company or the manager I work for.



So, back to your original question: may I suggest that the second sentence, in the simple present tense (i.e. In the future, we plan to migrate our tool to the Z3 solver), is preferable and would sound more natural coming from an enterprise/company/business source.


punctuation - A comma before "because" in a sentence like "This effect happens(,) because of this cause."


In this sentence, should there be a comma before because:



A file was excluded because it cannot be updated.



Part of me thinks cause and effect should have a comma.




This effect happens, because of this cause.



However, this would also be a defining clause. See: Comma before "because"




scripting - Why do we use scripts in development?


In my current project, Lua scripts are called by the C++ functions on the server side. After that, the scripts in turn call C++ functions in the same solution. Why should we do such things rather than calling the C++ functions directly? In what situations are scripts needed?





Sunday, February 28, 2016

articles - "Drug levels in the blood" vs "drug levels in blood"



The resulting molecule has a prolonged therapeutic effect even after a single-dose administration and a lower risk of adverse effects associated with fluctuations of drug levels in blood.



and




The resulting molecule has a prolonged therapeutic effect even after a single-dose administration and a lower risk of adverse effects associated with fluctuations of drug levels in the blood.



Do these sentences mean the same, or does the second imply that some particular blood is being mentioned?


I wrote "in the blood", but another translator said he would write "in blood". The ngram is as follows:


[Google Ngram chart comparing "in the blood" and "in blood"]


Would "in the blood" be understood as "in the blood of the patient who took the drug", and thus, for 99% of intents and purposes, be equal in meaning to option 1?



Answer



I didn't want to offer an answer that just went by my subjective opinion given your specific question and purpose, so I looked around the Internet for any references or style guides for medicine or medical transcription.


The most relevant one I was able to find was this document about Using Articles in Medical Writing from the English Language Unit of the Health Sciences Centre in Kuwait. That document has the following recommendations when it comes to describing blood: Usage of 'blood' with the definite article


I think your usage would be the third case shown above - you'd need to have previously set the context that "blood" refers to the blood of a patient who has ingested a drug, and since that context has been set, you can thereafter refer to it as "the blood" to make it clear you're referring to this specific blood sample (and not anyone's blood in general).



Using "in blood" is more general in that it refers to a typical sample of blood rather than a specific one. However, since you're referring to a specific subject's blood, it would seem more appropriate to use "in the blood" (after having set the context, as explained above).


I trust this offers the clarity you're looking for.


As a general note to all, I found that the English Language Unit mentioned earlier provides useful English resources specific to medicine on grammar, vocabulary, etc.


meaning - What are the differences between "would you believe it?" & "don't you believe it!"?


In the dictionary:



"don’t you believe it!" (informal) used to tell somebody that something is definitely not true. ‘She wouldn’t do a thing like that.’ ‘Don’t you believe it!’


would you believe (it)? (informal) used to show that you are surprised and annoyed about something. And, would you believe, he didn't even apologize!


I don’t believe it! (informal) used to say that you are surprised or annoyed about something. I don't believe it! What are you doing here?



So, it seems that "would you believe (it)?"="I don’t believe it!"



But I am not sure what "would" in "would you believe (it)?" means. Or is it just something native speakers say, without any particular meaning?



Answer




But I am not sure what "would" in "would you believe (it)?" means. Or is it just something native speakers say, without any particular meaning?



Would in "Would you believe it?" is a modal verb. I'll try to explain the meaning using this example:



Would you believe it if I told you that he did not even apologize!



See? It's a rhetorical question. The speaker is implying that the listener would not believe it if he were told it.



Thus, would is a modal verb similar to the would used here:



(said in an internet chatroom) "If I told you that I live in Russia, would you believe it?"



It's only that with "Would you believe it?!" you are not really asking; you are saying that "no normal person would believe that" or that "every normal person would be surprised if it were true".





So, it seems that "would you believe (it)?"="I don’t believe it!"



They are somewhat similar, but not precisely the same. Imagine you are alone in your flat. You walk into the bathroom and see that your washing machine is leaking water on the floor. You would be more likely to say




I don't believe it! I only bought it a month ago!



Instead of



Would you believe it? I only bought it a month ago!



...because there is nobody around to address this rhetorical question to.


physics - Determining an object's position along a curve over time


I have some objects in my game which are "thrown". At the moment I am trying to implement this by having these objects follow a parabolic curve. I know the start point, the end point, the vertex and the speed of the object.




  1. How can I determine at any given time or frame what the x & y co-ordinates are?

  2. Is a parabolic curve even the right curve to be using?



Answer



What you're looking for is a parametric plot of the parabolic function. It's easiest to make the parametric function use a range of p ∈ [0,1].


The canonical form for a parametric parabola is



k := some constant
f_x(p) = 2kp

f_y(p) = kp²



Using this formula and some basic algebra for function morphing, I got:



p ∈ [0,1] → x,y ∈ [0,1]
or in other words keep p between 0 and 1 and x,y will be between 0 and 1 as well.
x = p
y = 4p - 4p²



These functions will produce the numbers you're looking for:



float total_time = 2;
float x_min = 0;
float x_max = 480;
float y_min = 0;
float y_max = 320;

float f_x( float time )
{
    float p = time/total_time;
    return x_min + (x_max-x_min)*p;
}

float f_y( float time )
{
    float p = time/total_time;
    return y_min + (y_max-y_min)*(4*p-4*p*p);
}
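For reference, the same mapping as a runnable sketch in Python (the 480×320 bounds and 2-second flight time are just the example values from the code above):

```python
TOTAL_TIME = 2.0
X_MIN, X_MAX = 0.0, 480.0
Y_MIN, Y_MAX = 0.0, 320.0

def f_x(t):
    # Linear horizontal motion: p goes from 0 to 1 over the flight.
    p = t / TOTAL_TIME
    return X_MIN + (X_MAX - X_MIN) * p

def f_y(t):
    # 4p - 4p^2 is 0 at p = 0 and p = 1, and peaks at 1 when p = 0.5,
    # so the object starts and ends at y_min and reaches y_max mid-flight.
    p = t / TOTAL_TIME
    return Y_MIN + (Y_MAX - Y_MIN) * (4 * p - 4 * p * p)
```

Evaluating f_x and f_y once per frame with the elapsed time gives the object's position along the arc.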

headlinese - Lack of Articles in Headlines


Are there any rules about using (or omitting) an article before a singular noun in headlines?



Description of Type of Investment





directx - How do I create a decal system?


I'm currently given the task to design & create a simple decal-system in C++/DirectX.


Does anybody know a great tutorial, article or paper to start with? (Especially the part about 2D-texture to 3D-surface projection and clipping isn't entirely obvious to me.)


I'd also appreciate just simple hints and best-practices.



Answer




I am in the middle of writing my decal system (C#, XNA), and it is going pretty well. So let me know if you need some help. Here is a video of it in action: http://www.twitch.tv/battlekiller/b/308505930 . (The decal is the yellow spell circle following the mouse.)


writing - How to speak as a native speaker?


I'm originally from central Europe and am currently in my second year of living in the UK. I work at an IT company surrounded only by native speakers.


I can feel that the speed at which I learn English has rapidly decreased since I moved here. Yes, I can express myself and I can work here, but my ability to speak English is around 20% of my mother tongue. Passive knowledge is not a problem, but active knowledge is quite a pain.


Is there some way I can speak, let's say in 10 years, like a native speaker?


Yes, it sounds silly, but please try to think about it. I just don't believe there is no way and that I should just wait until it fixes itself.


I make really simple mistakes:



  • Not using a/an

  • Using he instead of she

  • Using sleepover instead of oversleep


  • Wrong usage of have/has

  • Wrong usage of (have/has/had) been


What I have tried so far



  • English classes - it is quite hard to pay attention after work, and they focus on topics I don't need

  • Text books - I just don't like it

  • I read and watch a lot of English content, but does it really help?


Ideas I have




  • f2f English classes just for me, not in a classroom - I can focus on the topics I really need

  • Start blogging and have each post checked by someone before publishing - I can learn from my mistakes

  • Socialising - I'm an introvert in any language, but maybe I should try harder




Saturday, February 27, 2016

game maker - Drawing only objects that are currently in viewport [Gamemaker]


Following on from my previous question about optimising large object counts in Gamemaker, I am looking for a method for the solution proposed by liggiorgio.


My game has large numbers of small, sprite-based objects representing individual strands or clumps of fur. For a variety of reasons, they all need to be interactive objects. At the moment, I am reaching limits on how many I can generate without the game grinding to a halt.


As the game involves a very large (20,000px x 20,000px) room with a restricted viewport, liggiorgio proposed a solution using the built-in Application Surface to only draw objects which are currently within the viewport. This would mean that the vast majority of the room, and its objects, would not be drawn, and hopefully framerates should improve.


Can anybody help me with implementing this?


Update: I am now using this code in an object's Draw event as a test:



if (x > view_xview[0] and x < (view_xview[0] + view_wport[0]))
and (y > view_yview[0] and y < (view_yview[0] + view_hport[0]))
{
    draw_self()
}

However, even when I pan the view over to the affected object, it is not drawn.


Edit: Thanks, liggiorgio, for your solution below. This code works well for drawing and not drawing based on the view in the room, as far as I can tell. However, not drawing the large number of oFur objects does not seem to help with my performance issues. I think that deactivating those not currently in view is the best option.


I know that deactivating instances can be risky, but I don't think it will be a problem for me:




  • I only have one room in my game, so there is no worry about persistent objects not being carried over;

  • At no point will I be trying to delete all of the oFur objects at once;

  • I have created a FurDeactivation control object, so I won't fall into the trap of having the oFur modules try to deactivate themselves and then therefore be unable to reactivate themselves when they are in the view. I made this mistake at first, but I've now corrected it with the FurDeactivation control object.


This is the script that is running in the Step event of the control object; the ObjectType is oFur:


///DeactivateIfOutOfFrame(ObjectType)
ObjectType = argument0

with ObjectType {
    if ((x > (view_xview[0]-sprite_width/2)) && (y > (view_yview[0]-sprite_height/2)) && (x < (view_xview[0]+view_wview[0]+sprite_width/2)) && (y < (view_yview[0]+view_hview[0]+sprite_height/2)))
    {
        instance_activate_object(self)
    } else {
        instance_deactivate_object(self)
    }
}

At the moment, it does not seem to be deactivating any of the instances of oFur outside the view. Any ideas?




Answer



Following on from my edit above, I have worked out how to deactivate the oFur objects and reactivate them only when they are in the current viewport. While the game runs slowly (considering there are ~400,000 oFur objects with about 3000 on screen at any one time) it now, at least, runs. The next steps will be:




  • making each oFur object larger, to cover more of the screen at a time;




  • reducing the number of oFur objects to improve performance:





  • Perhaps making the room itself smaller, to lower the count needed;




  • finding a way to generate them more quickly (as the method detailed below takes about 30 seconds to generate all of them).




Solution


An invisible block, mFurBlock, is generated in the top left of the room, and 1,000 oFurs are generated in a random fashion across it. This will serve as the 'seed' block, from which all others are created. If I were to try to generate the fur across the entire room, it would take nearly two hours.


Instead, I copy each oFur in this mFurBlock across the room a number of times corresponding to (WidthOfRoom / WidthOfBlock), which in this case is 20,000 / 1,000, leaving me with 20 'blocks' of fur.


Each time the fur is copied, its x is increased by WidthOfBlock * NumberOfCopiesMade to gradually move them across the room. Therefore, if 4 copies had been made, the next copies would be moved 4000 pixels across the room. This leaves me with an initial row of fur across the top of the room.
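The copy arithmetic described above can be sketched as follows (Python, using the room and block widths from this answer):

```python
ROOM_W = 20000   # room width in pixels
BLOCK_W = 1000   # seed-block width in pixels

# How many block copies are needed to span one row of the room.
copies_per_row = ROOM_W // BLOCK_W   # 20

def copy_x(seed_x, copies_made):
    # Each new copy is shifted right by one block width
    # per copy already made, so copies tile across the row.
    return seed_x + BLOCK_W * copies_made
```

With 4 copies already made, a seed fur at x = 250 lands at x = 4250, matching the "moved 4000 pixels across the room" example.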



The fur block is then copied down the room, and a new row made using the same code. This leads to a 'scanning' effect across the room, with new furs being created and positioned in blocks of 1000px x 1000px, row by row.


The key is, after each copy has been made and positioned, to deactivate those copies. This means that there are never too many instances active (in fact, only the 'seed' block and the current copied block) and the game won't crash. The framerate does drop, but we could hide this behind a static loading screen or something similar. Or, you know, just optimise it.


When the entire room is finished, the 'only active when in viewport' code is activated in a separate 'Control' object, and persists for the rest of the game:


if global.WholeFieldGenerated = true {
    if DeactivateToggled = false {
        instance_deactivate_object(oFur)
        instance_activate_region(view_xview[0], view_yview[0], view_wview[0], view_hview[0], true)
    } else if DeactivateToggled = true {
        instance_deactivate_object(oFur)
    }
}


This deactivates all oFurs every step, and then activates any deactivated objects in the room that are currently within view.
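The activate-within-view step is essentially a rectangle-overlap test. Here is a language-neutral sketch of the same check in Python (the half-sprite margin mirrors the Step-event script from the question; all names are made up):

```python
def in_view(x, y, half_w, half_h, view_x, view_y, view_w, view_h):
    # True when a sprite centred at (x, y), extending half_w/half_h
    # in each direction, overlaps the view rectangle.
    return (x > view_x - half_w and
            y > view_y - half_h and
            x < view_x + view_w + half_w and
            y < view_y + view_h + half_h)
```

Instances for which this returns False can safely be deactivated; those for which it returns True should be (re)activated.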


Below is my commented generation script, so you can see exactly how I did it.


It needs work, but it at least works!


//If the first fur block has been completed, and finished generating...
if CreatedFirstFurBlock = true and mFurBlock.FinishedGenerating = true {

/* CREATING THE FIRST ROW */
if FirstRowCompleted = false {


if NumberOfFurBlocksGenerated < global.TotalDuplicationsPerRow {
with oFurBlockFur {
//If a fur piece is within the FurBlock (i.e. it is in the 'root' block)...
if x <= mFurBlock.sprite_width {
//...Copy it and move it along the row the number of 'spaces' required.
ThisCopy = instance_copy(true)
ThisCopy.x = x + (mFurBlock.sprite_width * mFurGeneration.NumberOfFurBlocksGenerated)
//Deactivate the copy, hiding it from view now that it is positioned. This will not affect the 'root' block, meaning that we can use it again.
instance_deactivate_object(ThisCopy)
}

}
//Increase the number of fur blocks generated.
NumberOfFurBlocksGenerated = NumberOfFurBlocksGenerated + 1
} else if NumberOfFurBlocksGenerated >= global.TotalDuplicationsPerRow {
FirstRowCompleted = true
NumberOfFurBlocksGenerated = 1
}
/* CREATING SUBSEQUENT ROWS */

} else if FirstRowCompleted = true {


if NumberOfRowsGenerated < global.TotalRows {

if RowDone = true {
with oFurBlockFur {
//If the fur is part of our 'root block'...
if y <= mFurBlock.sprite_height and x <= mFurBlock.sprite_width {
//...Copy it and move it down the number of 'rows' required. Mark it as in the root so
//that we can deactivate it later.
ThisCopyRoot = instance_copy(true)

ThisCopyRoot.InRoot = true
ThisCopyRoot.y = y + (mFurBlock.sprite_height * mFurGeneration.NumberOfRowsGenerated)
//We don't deactivate the copy, as we need it to generate the row. However, we won't deactivate the 'root' block either, as it is useful to use
//as a generator for subsequent rows. This will only increase the number of 'blocks' active by 1, as each row is deactivated each time they have finished
//generating.
}
}
RowDone = false
ThisRowBlocksGenerated = 1
}


//Reset the row done function. This allows us to get on with making this new row.



if RowDone = false {
if ThisRowBlocksGenerated < global.TotalDuplicationsPerRow {
with oFurBlockFur {
//If a fur piece is in the 'root block' of the current row...
if y >= (mFurBlock.sprite_height * mFurGeneration.NumberOfRowsGenerated) {

//Copy and move it along, as before.
ThisCopy = instance_copy(true)
ThisCopy.x = x + (mFurBlock.sprite_width * mFurGeneration.ThisRowBlocksGenerated)
instance_deactivate_object(ThisCopy)
}
}
//Update the block count in this row.
ThisRowBlocksGenerated = ThisRowBlocksGenerated + 1

//When we have finished this row, update the number of rows, set the row as done (which allows a new 'root' block to be generated next step)

//and deactivate the 'root' block for this row. Can this last thing be done? I'm not sure in this current setup.
} else if mFurGeneration.ThisRowBlocksGenerated >= global.TotalDuplicationsPerRow {
NumberOfRowsGenerated = NumberOfRowsGenerated + 1
with oFurBlockFur {
if InRoot = true {
instance_deactivate_object(self)
}
}
RowDone = true
}

}

//We now replay this code, moving the 'root block' down via copy, using that to generate a new row, and then deactivating the entire row.


//Once all the rows are generated and deactivated, we can tell the ObjectViewDeactivator that it is ready to shine. This Generation object can then be deleted.
} else if NumberOfRowsGenerated >= global.TotalRows {
NumberOfFurBlocksGenerated = 0
global.WholeFieldGenerated = true
instance_destroy()

}

}
}

meaning - Isn't this sentence from Weathering grammatically wrong?



She was meant to stay indoors but everything looked varnished and bright after the rain, so she put her coat on and went outside, then came back in and slung the camera over her shoulder. Through the sopping grass and down towards the river. It was wide and brown today, and it rippled and churned. There were deep creases when it went round rocks and a hollow, clunking noise. It looked strong, like a muscle. When she threw in a stick, the stick didn't float on the surface – it got dragged under, as if something had reached up to grab it. She walked along the bank and there was the bridge she'd seen in some of the photos – it had rusty railings and a broken plank in the middle.


Source: Weathering by Lucy Wood, p. 83



I don't understand the meaning of this sentence: there were deep creases when it went round rocks and a hollow, clunking noise.



First, does this 'it went round rocks' mean river flows around a patch of rocks? Second, isn't 'a hollow, clunking noise' grammatically wrong?



Answer



Native speakers also find that sentence clumsy, confusing, or even ungrammatical. Perhaps the author made it that way deliberately, for artistic reasons, or perhaps it was just some sloppy writing that the copyeditor didn't fix.


Here's my attempt to rewrite it to make it clearer:



There were deep creases in the river's surface where it went round rocks, and there was a hollow, clunking noise.



Yes, the original sentence means that the river flowed around a patch of rocks.


The author wrote when to introduce the place where the deep creases were. Normally we would say where. It's not unusual in English to swap words for time with words for space, but in this sentence it's jarring.


When I first read the sentence, I thought it was ungrammatical because I couldn't find a verb that said anything about a hollow, clunking noise. At first, "a hollow, clunking noise" appears to be a second object of "went round", but that doesn't make sense: a river can't "go round" a noise.* Later, I noticed that the sentence as a whole is structured like this one:




There were three Queens and a Jack.



"Three Queens" corresponds to "deep creases when it went round rocks." "A Jack" corresponds to "a hollow, clunking noise". When the sentence is this short, it's easy to see that "three Queens" and "a Jack" are both subjects of the verb "were". In the original sentence, a reader tends to see "deep creases" as the whole subject of "were": "deep creases" has sort of "used up" the verb "were" in the reader's mind. People don't think about this consciously when reading, of course, but the result is that the reader is likely to feel lost upon reaching "hollow, clunking noise". The feeling of disorientation happens because that phrase doesn't seem connected to a verb.


That isn't wrong, it's just unnecessarily confusing. In my rewrite, I added there was to make the sentence easier to follow. In the original, the plural "were" led the reader not to connect it with "a hollow, clunking noise" so many words later, after a plural subject ("creases"). The singular "was" agrees with the singular "hollow, clunking noise", so the reader never gets lost. (This shows how there is more to English grammar than precise rules, and there are no exact boundaries between grammar, style, and clarity.)




*By the way, people seldom use "round" as a preposition in the United States, so Americans might be more likely to judge the sentence ungrammatical. I just figure that the author is likely to be British.


"You're using THE wrong formula" vs. "You're using A wrong formula": choosing between the definite and the indefinite article


I'm trying to understand the difference between the use of a definite article and an indefinite article here.




You are using a wrong formula (for the math problem).


You are using the wrong formula (for the math problem).



What's the difference if any?




Thursday, February 25, 2016

meaning - including but not limited to - explain this sentence



You will not be permitted to bring any personal items to the test centre, including but not limited to wrist-watch, cellphones, calculators, etc.




I think it means a candidate will not be permitted to bring any items. Is this right or wrong? Please explain with some examples.




maps - Procedural terrain generation in cylindrical (2D) world


I'm fairly new to procedural terrain generation. I know that to generate terrain you would use different calculations with the x and y axis (also z if you have one).


But what if you want to generate random terrain on a 2D world that loops around on itself? By that I mean a world in which you enter the right side upon leaving the left side (or the other way around). It should also be generated on the fly and not all at once.


One solution I can think of is to just generate ocean along the border, but that is not what I'm looking for. I also seem to remember that Civilization generated such types of worlds, but I'm not quite sure how that worked; it's been quite some time since I played that game.


So how exactly can one generate such a continuous world?



Answer



As mentioned in the comment, I figured what you required was wrappable noise. With wrappable noise, you can generate seamless textures and game worlds. You are probably familiar with generating 2D noise already.


What you need to do is sample 3 dimensional noise instead. Instead of sampling a rectangular shaped noise pattern, you are going to sample a cylindrical shaped pattern in 3D space.



[Diagram: sampling 3D noise on the surface of a cylinder]


The image above is what we will be doing. The idea is that you can cut this cylinder lengthwise at any point, and unroll it, and then use this as your map data. This will effectively wrap your noise around on 1 axis.


In order to do this, sample your noise data like so:


for (var x = 0; x < Width; x++) {
    for (var y = 0; y < Height; y++) {

        // Sample noise at smaller intervals
        float s = x / (float)Width;
        float t = y / (float)Height;

        // Calculate our 3D coordinates
        float nx = Mathf.Cos (s * 2 * Mathf.PI) / (2 * Mathf.PI);
        float ny = Mathf.Sin (s * 2 * Mathf.PI) / (2 * Mathf.PI);
        float nz = t;

        // Sample noise at the coordinate
        float heightValue = (float)HeightMap.Get (nx, ny, nz);

        // Save the noise value to a noise map
        mapData.Data [x, y] = heightValue;
    }
}

If you convert this map data into a texture, it would wrap on the x-axis and look something like:


[Example texture generated from cylinder-sampled noise; it tiles seamlessly on the x-axis]


More information can be found here.
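To see the wrapping property concretely, here is a self-contained Python sketch of the same sampling loop. It uses a toy deterministic function as a stand-in for a real noise library, so the exact values are meaningless; the point is that column 0 and column Width sample identical cylinder coordinates, so the map tiles seamlessly on the x-axis:

```python
import math

def toy_noise3(x, y, z):
    # Stand-in for a real 3D noise function (Perlin, simplex, ...).
    # Any continuous function of (x, y, z) demonstrates the wrap.
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719)

def cylinder_map(width, height):
    # One extra column (x == width) so the seam can be inspected.
    data = [[0.0] * height for _ in range(width + 1)]
    for x in range(width + 1):
        for y in range(height):
            s = x / width
            t = y / height
            # Bend the x-axis around a circle in 3D noise space.
            nx = math.cos(s * 2 * math.pi) / (2 * math.pi)
            ny = math.sin(s * 2 * math.pi) / (2 * math.pi)
            nz = t
            data[x][y] = toy_noise3(nx, ny, nz)
    return data

data = cylinder_map(16, 8)
```

Because s = 0 and s = 1 map to the same angle on the circle, the first and last columns of the map are identical, which is exactly the seamless wrap you want.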


How to use "resulting" in the middle of the sentence?




The tap is leaking. The water is wasted.


____________________ resulting
___________________________.



Is this correct?


The leaking tap resulting the wastage of water.



Answer



My best guess: The tap is leaking, resulting in water waste.



  • NOTE: I guess that it's tap (as in a water tap) rather than tape. The tape in your sentence could refer to some kind of seal tape, but if it really was seal tape, I'd expect the verb to be "torn off" or "broken", rather than "leaking".



The main point of your exercise seems to be about joining sentences by turning one of them into a participle clause. Because you're forced to use resulting, the exercise seems to want you to know the phrasal verb result in.


The main pattern of result in is result in something, which means that you have to turn the second sentence into a noun, which you did correctly as wastage of water, though I think waste is enough (wastage sounds a little too formal to me). You could use either a waste of water or, as I chose, simply water waste.


We now have a good enough background to convert the two sentences. Let's do it!



Original: The tap is leaking. The water is wasted.


Use 'result in', turn the idea into a noun:
The tap is leaking. This results in water waste.


Join the two sentences by turning result in into resulting in (a participle):
The tap is leaking, resulting in water waste.






NOTE: The leaking tap resulting (the) wastage of water is not a sentence, even with resulting in:



Incorrect: The leaking tap resulting in wastage of water. <-- DON'T USE IT!



Why? Because the auxiliary verb is missing! If you want to phrase it that way, you need at least this:


The leaking tap is resulting in wastage of water.


Wednesday, February 24, 2016

physics - implementing a magnet-like system in Unity


I'm implementing magnet-like behavior in Unity3D.
The red object has a script that adds force to it toward a blue object.
To do that I use this code:


void FixedUpdate()
{
    float distance = Vector2.Distance(blue.transform.position, transform.position);
    speed = MAX_DISTANCE - distance;
    body2d.AddForce((blue.transform.position - transform.position).normalized * speed);
}

MAX_DISTANCE is the maximum distance that the red and blue objects can be apart.


With this code I get this behavior:


[Animation showing the observed behavior]


But I don't want the red object to get this far after reaching the blue object. I want it to decrease its speed, perhaps shake around the blue object a little, and eventually stop on it. How can I implement this?


UPDATE

The red and blue objects can't collide with each other; they just know each other's locations.



Answer



You really shouldn't redefine physics; you already have Physics2D. What you need is a PointEffector2D on the blue object and a Rigidbody2D on the red object. Your red object also needs to have its drag and angularDrag set to something other than zero for a realistic effect. You can skip angularDrag and leave it at zero, but you might need it later on.


Since the object will get to a point where the distance is almost zero, you'll have (almost) infinite force at that point. In the following gif, the "blue" object has a PointEffector2D with the force -30 and a CircleCollider2D for the effector to use. The "red" ball on the outside has a Rigidbody2D with a mass of 1 and a drag of 0.25.


[GIF: the red ball being pulled in toward the blue object and settling]


Here are the details of the objects (as I already had them at hand, I'm just putting them here for future reference). Be aware that you might or might not need a Collider2D on the "red" object, I'm not entirely sure.


[Inspector settings for the two objects]


So, when the object gets near enough to the point you want it to be (and when it's slow enough to stop), remove its Rigidbody2D, or just disable it, and set its position. You can also increase the drag your object has to a point where it'll stand still. This way you're ensuring that the object won't wobble. For example:


if(distanceBetweenObjects <= 0.1 && redObject.velocity.magnitude < 1) {
    redObject.Rigidbody2D.enabled = false;
    redObject.transform.position = blueObject.transform.position;
}

You'll have to tweak the values and maybe the code but this is close enough.
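If you would rather keep a hand-rolled force instead of PointEffector2D, the standard trick is to add a damping force proportional to velocity, which gives exactly the slow-down, wobble, settle behaviour the question asks for. A minimal 1D sketch in plain Python, with made-up constants:

```python
def simulate(target=10.0, pos=0.0, vel=0.0,
             stiffness=5.0, damping=2.0, dt=0.02, steps=2000):
    # Spring force toward the target, minus a force proportional
    # to velocity. Semi-implicit Euler integration, unit mass.
    for _ in range(steps):
        force = stiffness * (target - pos) - damping * vel
        vel += force * dt
        pos += vel * dt
    return pos, vel
```

Increasing stiffness makes the pull stronger; increasing damping reduces the wobble (for unit mass, the motion stops oscillating entirely once damping reaches 2·√stiffness).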


exceptions - NullReferenceException in Unity


Since many users are facing the NullReferenceException: Object reference not set to an instance of an object error in Unity, I thought it would be a good idea to gather, from multiple sources, some explanation of this error and ways to fix it.




Symptoms


I am getting the error below in my console. What does it mean, and how do I fix it?



NullReferenceException: Object reference not set to an instance of an object




Answer



Value type vs Reference type



In many programming languages, variables have what is called a "data type". The two primary kinds are value types (int, float, bool, char, struct, ...) and reference types (instances of classes). While value types contain the value itself, references contain a memory address pointing to a portion of memory allocated to contain a set of values (similar to C/C++).


For example, Vector3 is a value type (a struct containing the coordinates and some functions), while components attached to your GameObject (including your custom scripts inheriting from MonoBehaviour) are reference types.


When can I have a NullReferenceException?


A NullReferenceException is thrown when you try to access a reference variable that isn't referencing any object; hence it is null (the memory address points to 0).


Some common places a NullReferenceException will be raised:


Manipulating a GameObject / Component that has not been specified in the inspector


// t is a reference to a Transform.
public Transform t;

private void Awake()
{
    // If you do not assign something to t
    // (either from the Inspector or using GetComponent), t is null!
    t.Translate(Vector3.up);
}

Retrieving a component that isn't attached to the GameObject and then, trying to manipulate it:


private void Awake ()
{
// Here, you try to get the Collider component attached to your gameobject

Collider collider = gameObject.GetComponent<Collider>();

// But, if you haven't any collider attached to your gameobject,
// GetComponent won't find it and will return null, and you will get the exception.
collider.enabled = false ;
}

Accessing a GameObject that doesn't exist:


private void Start()
{

// Here, you try to get a gameobject in your scene
GameObject myGameObject = GameObject.Find("AGameObjectThatDoesntExist");

// If no object with the EXACT name "AGameObjectThatDoesntExist" exists in your scene,
// GameObject.Find will return null, and you will get the exception.
myGameObject.name = "NullReferenceException";
}

Note: Be careful: GameObject.Find, GameObject.FindWithTag and GameObject.FindObjectOfType only return gameObjects that are enabled in the hierarchy when the function is called.


Trying to use the result of a getter that's returning null:



var fov = Camera.main.fieldOfView;
// main is null if no enabled cameras in the scene have the "MainCamera" tag.

var selection = EventSystem.current.firstSelectedGameObject;
// current is null if there's no active EventSystem in the scene.

var target = RenderTexture.active.width;
// active is null if the game is currently rendering straight to the window, not to a texture.

Accessing an element of a non-initialized array



private GameObject[] myObjects ; // Uninitialized array

private void Start()
{
for( int i = 0 ; i < myObjects.Length ; ++i )
Debug.Log( myObjects[i].name ) ;
}

Less common, but annoying if you don't know about it: C# delegates.


delegate double MathAction(double num);


// Regular method that matches signature:
static double Double(double input)
{
return input * 2;
}

private void Awake()
{
MathAction ma = null ;


// Because you haven't "assigned" any method to the delegate,
// you will have a NullReferenceException
ma(1) ;

ma = Double ;

// Here, the delegate "contains" the Double method and
// won't throw an exception
ma(1) ;

}

How to fix it?


If you have understood the previous paragraphs, you know how to fix the error: make sure your variable references (points to) an instance of a class (or, for a delegate, contains at least one function).


Easier said than done? Yes, indeed. Here are some tips to avoid and identify the problem.


The "dirty" way : The try & catch method :


Collider collider = gameObject.GetComponent<Collider>();

try
{

collider.enabled = false ;
}
catch (System.NullReferenceException exception) {
Debug.LogError("Oops, there is no collider attached", this) ;
}

The "cleaner" way (IMHO) : The check


Collider collider = gameObject.GetComponent<Collider>();

if(collider != null)

{
// You can safely manipulate the collider here
collider.enabled = false;
}
else
{
Debug.LogError("Oops, there is no collider attached", this) ;
}




When facing an error you can't solve, it's always a good idea to find the cause of the problem. If you are "lazy" (or if the problem can be solved easily), use Debug.Log to print information to the console that will help you identify what could cause the problem. A more complex way is to use the breakpoints and the debugger of your IDE.


Using Debug.Log is quite useful to determine which function is called first for example. Especially if you have a function responsible for initializing fields. But don't forget to remove those Debug.Log to avoid cluttering your console (and for performance reasons).


Another tip: don't hesitate to "cut" your function calls and add Debug.Log calls to make some checks.


Instead of :


 GameObject.Find("MyObject").GetComponent<MySuperComponent>().value = "foo" ;

Do this to check that every reference is set:


GameObject myObject = GameObject.Find("MyObject") ;

Debug.Log( myObject ) ;


MySuperComponent superComponent = myObject.GetComponent<MySuperComponent>() ;

Debug.Log( superComponent ) ;

superComponent.value = "foo" ;

Even better :


GameObject myObject = GameObject.Find("MyObject") ;


if( myObject != null )
{
MySuperComponent superComponent = myObject.GetComponent<MySuperComponent>() ;
if( superComponent != null )
{
superComponent.value = "foo" ;
}
else
{
Debug.Log("No SuperComponent found onMyObject!");

}
}
else
{
Debug.Log("Can't find MyObject!", this ) ;
}

Sources:



  1. http://answers.unity3d.com/questions/47830/what-is-a-null-reference-exception-in-unity.html


  2. https://stackoverflow.com/questions/218384/what-is-a-nullpointerexception-and-how-do-i-fix-it/218510#218510

  3. https://support.unity3d.com/hc/en-us/articles/206369473-NullReferenceException

  4. https://unity3d.com/fr/learn/tutorials/topics/scripting/data-types


c++ - Should an object in a 2D game render itself?


I'm making a 2D Street Fighter-like game that is not tile based. Usually people recommend that entities be given to a renderer that renders them, rather than rendering themselves, but it seems to me the inverse is better.


Why is one better over the other?


Thanks



Answer




A couple of considerations:




  • as you mentioned, each sprite would have to "hint" at which bitmap to use if the entity has to render itself. What would that 'hint' be? If it is a reference to a different bitmap, sprite sheet, etc. for each sprite, then you might end up using more memory than necessary, or having trouble managing that memory. An advantage of a separate renderer is that you have only one class responsible for all asset management. That said, in an SF2-like fighting game, you might only have two sprites ;)




  • as mentioned elsewhere, whenever you want to change your graphics API, you have to change the code for all your sprites.




  • rendering is rarely done without a reference to some graphical context. So either there is a global variable that represents this concept, or each sprite has an interface like render(GraphicalContext ctx). This mixes the graphics API with the logic of your game (which some people will find inelegant), and might cause compilation issues.





  • I personally find that separating the rendering from the individual entities is an interesting first step toward viewing your game as a system that does not necessarily need graphics at all. What I mean is that when you put rendering out of the way, you realize a lot of the gameplay happens in a "non-graphic world" where the coordinates of the entities, their internal states, etc. are what matter. This opens the door to automated testing, more decoupled systems, etc.




All in all, I tend to prefer systems where rendering is done by a separate class. That does not mean your sprites cannot have some attributes that are "graphically related" (animation name, animation frame, height and width, sprite id, etc.), if that makes the renderer easier to write or more efficient.
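A minimal sketch of that separation (all names here are hypothetical, and it's in Python for brevity rather than the asker's C++): entities hold only graphically-related data, and a single Renderer class turns that data into draw calls.

```python
# Entities don't render themselves: sprites hold only graphically-related
# state; one Renderer class maps that state to the graphics API.
class Sprite:
    def __init__(self, x, y, animation, frame):
        self.x, self.y = x, y
        self.animation = animation   # e.g. "idle", "punch"
        self.frame = frame           # current animation frame index

class Renderer:
    def __init__(self):
        self.draw_calls = []  # stand-in for a real graphics API

    def draw_bitmap(self, name, frame, x, y):
        self.draw_calls.append((name, frame, x, y))

    def render(self, sprites):
        # The renderer alone decides how sprite state maps to draw calls,
        # so swapping the graphics API touches only this class.
        for s in sprites:
            self.draw_bitmap(s.animation, s.frame, s.x, s.y)

sprites = [Sprite(0, 0, "idle", 2), Sprite(5, 0, "punch", 0)]
renderer = Renderer()
renderer.render(sprites)
print(renderer.draw_calls)
```

Note that the game logic can run (and be unit-tested) without the Renderer existing at all, which is the decoupling benefit described above.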


And I don't know whether this would apply to 3D (where the notion of meshes and the coordinate variables you would use would maybe be tied to your 3D API, whereas x, y, h, w are pretty much independent of any 2D API).


Hoping this helps.


Tuesday, February 23, 2016

meaning in context - would you throw a light on the concept of the sentence?



In these clinical cases, as in other phobias, the more likely cause is a displacement of diffuse anxiety to an external focus which can be avoided.



I am wondering what this sentence could mean, or what the concept of it is.


What is more, would you please show me in a more readily understandable way what is meant by the words clinical cases?


And, really, what does external focus mean there?


Extracted from the following; Psychopathology:


Any feed-back would be greatly appreciated



I cannot yet get the following:



the ... cause is a displacement of diffuse anxiety to an external focus which can be avoided.



UPDATED: Thanks. Yet, could anyone please show me the concept of the sentence through a vivid example?




grammaticality - Grammar and rearranging hyperbaton in 'The Rime of the Ancient Mariner'?


https://english.stackexchange.com/a/184742/50720 exemplifies with The Rime of the Ancient Mariner:




The rock shone bright, the kirk no less,
 [1.] That stands above the rock:
The moonlight steeped in silentness
 [2.] The steady weathercock.


And the bay was white with silent light,
 [3.] Till rising from the same,
Full many shapes, that shadows were,
 In crimson colours came.



1. What does That refer to?



2. Is The steady weathercock supposed also to be 'steeped in silentness'? I recognise this as poetry, but what's the phenomenon here called? Why's there no conjunction linking The steady weathercock?


3. How do you determine/deduce the correct (re)order of words in the 3 lines after [3.]?



Answer



That = kirk = church.


Moonlight = subject
steeped = transitive verb
the steady weathercock = direct object of steeped (steady because windless or no changing winds)

The bay was white with silent light, till full many shapes, that were shadows, came rising in crimson colors from the same (i.e. from the bay that was white with silent light).


Compare came running, came galloping, came hopping, came sledding...



Full is not an adjective but an adverb (see http://www.merriam-webster.com/dictionary/full) that modifies many. Very many shapes.


grammar - "I wish I had not done something" vs "I wish I would not have done something"


Tell me please if there is any difference between the following sentences?




I wish I hadn't dropped out of college.


I wish I wouldn't have dropped out of college.



I didn't know that the latter structure existed till I came across this example



He wishes he would’ve checked for his wallet before leaving, but he realized that hindsight is 20/20



on this site.




meaning in context - 'Team' doesn't or don't'?


I have found that in BrE, 'the team' is either singular or plural, while in AmE, it is always singular from here: Team as singular or plural


I read the following line in the article:



Given the right conditions, and the team playing to its potential, Kohli will be disappointed if his team don’t win the series.



Now, my question is: Should we put 'doesn't' in place of 'don't' as looking at the context, here 'team' looks singular? Why?




Monday, February 22, 2016

unity - Need help with a field of view-like collision detector!


I ran into trouble while making a field of view for my character. I figured out how to make it work with a Linecast, but what I really need is a cone-shaped field, so that the character can detect objects or enemies if they get into that field of vision. It also needs to be intersectable by other objects, so OnCollisionEnter won't work. I suspect that raycasting might solve the problem, but I couldn't quite understand its workings, because I'm still new. I would really appreciate any ideas that might help solve it. Here is what I need:


collision detector


Here is what I have so far, but this detector is unsuitable, because it can only detect objects in a straight line:


public Transform sightStart, sightEnd;
// Update is called once per frame
public bool objectSpotted = false;
void Update () {
Raycasting1 ();
}

void Raycasting1()
{
Debug.DrawLine(sightStart.position, sightEnd.position, Color.green);
objectSpotted = Physics2D.Linecast (sightStart.position, sightEnd.position);
}


opengl - I can't make the cube map to use shadow mapping - or it seems so


Basically, I'm following tutorials as I'm very new to OpenGL. I managed to perform shadow mapping towards just one direction, meaning I created 2D texture depth map. Now with omnidirectional shadow mapping I'm having some trouble. Here are the main parts of my code:


First render-creating the cube map


glUseProgram(cubeShader);
glViewport(0, 0, 1024, 1024);
glBindFramebuffer(GL_FRAMEBUFFER, depthCubeMapFBO);
glClear(GL_COLOR_BUFFER_BIT);
glBindFramebuffer(GL_FRAMEBUFFER, depthCubeMapFBO);

glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthCubeMap, 0);


mat4 modelMatrix = mat4(1);
glUniform3f(lightLocationCube, lightPos.x, lightPos.y, lightPos.z);
glUniform1f(farPlaneCubeLoc, 25.0f);
mat4 shadowProj = perspective(double(radians(90.0f)), 4.0 / 4.0, 1.0, 10.0);
lightTransforms.clear();
lightTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3(1.0, 0.0, 0.0), glm::vec3(0.0, -1.0, 0.0)));

lightTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3(-1.0, 0.0, 0.0), glm::vec3(0.0, -1.0, 0.0)));
lightTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3(0.0, 1.0, 0.0), glm::vec3(0.0, 0.0, 1.0)));
lightTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3(0.0, -1.0, 0.0), glm::vec3(0.0, 0.0, -1.0)));
lightTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3(0.0, 0.0, 1.0), glm::vec3(0.0, -1.0, 0.0)));
lightTransforms.push_back(shadowProj *
glm::lookAt(lightPos, lightPos + glm::vec3(0.0, 0.0, -1.0), glm::vec3(0.0, -1.0, 0.0)));


for (int i = 0; i < 6; ++i) {
mat4 nowT = lightTransforms[i];
glUniformMatrix4fv(shadowMatricesCubeLoc[i], 1, GL_FALSE, &nowT[0][0]);
}


glBindVertexArray(objVAO);
glBindFramebuffer(GL_FRAMEBUFFER, depthCubeMapFBO);
glUniformMatrix4fv(modelMatrixCubeLoc, 1, GL_FALSE, &modelMatrix[0][0]);

glDrawArrays(GL_TRIANGLES, 0, objVertices.size());
modelMatrix *= translate(mat4(), vec3(0, 0, 5))*rotate(mat4(), 3.14f, vec3(0, 1, 0));
glUniformMatrix4fv(modelMatrixCubeLoc, 1, GL_FALSE, &modelMatrix[0][0]);
glDrawArrays(GL_TRIANGLES, 0, objVertices.size());

// Drawing a cube
glBindVertexArray(planeVAO);
glBindFramebuffer(GL_FRAMEBUFFER, depthCubeMapFBO);
modelMatrix = scale(mat4(), vec3(10, 10, 0));
modelMatrix = rotate(mat4(), radians(0.0f), vec3(0, 1, 0));

glUniformMatrix4fv(modelMatrixCubeLoc, 1, GL_FALSE, &modelMatrix[0][0]);
glDrawArrays(GL_TRIANGLES, 0, 6);
modelMatrix = scale(mat4(), vec3(10, 10, 0));
modelMatrix = rotate(mat4(), radians(90.0f), vec3(0, 1, 0));
glUniformMatrix4fv(modelMatrixCubeLoc, 1, GL_FALSE, &modelMatrix[0][0]);
glDrawArrays(GL_TRIANGLES, 0, 6);
modelMatrix = scale(mat4(), vec3(10, 10, 0));
modelMatrix = rotate(mat4(), radians(180.0f), vec3(0, 1, 0));
glUniformMatrix4fv(modelMatrixCubeLoc, 1, GL_FALSE, &modelMatrix[0][0]);
glDrawArrays(GL_TRIANGLES, 0, 6);

modelMatrix = scale(mat4(), vec3(10, 10, 0));
modelMatrix = rotate(mat4(), radians(270.0f), vec3(0, 1, 0));
glUniformMatrix4fv(modelMatrixCubeLoc, 1, GL_FALSE, &modelMatrix[0][0]);
glDrawArrays(GL_TRIANGLES, 0, 6);


glBindFramebuffer(GL_FRAMEBUFFER, 0);

}


Earlier I've done the relative 'declarations' (if I'm using the right word) for the cube map :


glGenFramebuffers(1, &depthCubeMapFBO);
glGenTextures(1, &depthCubeMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, depthCubeMap);
for (unsigned int i = 0; i < 6; i++) {
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT, 1024, 1024, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);


glBindFramebuffer(GL_FRAMEBUFFER, depthCubeMapFBO);
glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, depthCubeMap, 0);
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, 0);


//CubeMapLocs

lightLocationCube = glGetUniformLocation(cubeShader, "lightPos");
modelMatrixCubeLoc = glGetUniformLocation(cubeShader, "M");
farPlaneCubeLoc = glGetUniformLocation(cubeShader, "far_plane");
for (int i = 0; i < 6; i++) {
string added = "shadowMatrices[" + std::to_string(i) + "]";
shadowMatricesCubeLoc[i] = glGetUniformLocation(cubeShader, added.c_str());


}

I will not include everything from the second rendering loop, as you'd probably see some weird stuff and it would get very long, but here is where I send the textures to the shader:


glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTexture);
glUniform1i(diffuceColorSampler, 0);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, specularTexture);
glUniform1i(specularColorSampler, 1);


glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_CUBE_MAP , depthCubeMap);
glUniform1i(shadowMapLoc, 2);

I'm using a second shader for this loop, and now I will include everything in the two shaders.


1st loop vertex shader :


    #version 330 core

uniform mat4 M;

layout(location = 0) in vec3 vertexPosition_modelspace;

void main()
{
gl_Position=M*vec4(vertexPosition_modelspace,1.0);

}

1st loop geometry shader :


#version 330 core

layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;

uniform mat4 shadowMatrices[6];

out vec4 FragPos;

void main()
{
for(int face = 0; face < 6; ++face)

{
gl_Layer = face;
for(int i = 0; i < 3; ++i)
{
FragPos = gl_in[i].gl_Position;
gl_Position = shadowMatrices[face] * FragPos;
EmitVertex();
}
EndPrimitive();
}

}

1st loop fragment shader


    #version 330 core
in vec4 FragPos;


uniform vec3 lightPos;
uniform int far_plane;



void main()
{
// get distance between fragment and light source
float lightDistance = length(FragPos.xyz - lightPos);
// map to [0;1] range by dividing by far_plane
lightDistance = lightDistance / 25.0;
// write this as modified depth
gl_FragDepth = lightDistance;
//col=gl_FragDepth*vec4(1,1,1,1);

//col.w=1;
//col=vec4(0.5,0,1.0,1.0);
//fragment_color = vec4(vec3(closestDepth / 25.0), 1.0);
}

2nd loop vertex shader


#version 330 core

// construct input layout for the corresponding attributes
// (vertexPosition_modelspace, vertexNormal_modelspace, vertexUV)

layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec3 vertexNormal_modelspace;
layout(location = 2) in vec2 vertexUV;

// Output variables (position_modelspace, normal_modelspace and UV coordinates),
// that will be interpolated for each fragment
out vec3 vertex_position_modelspace;
out vec3 vertex_normal_modelspace;
out vec2 vertex_UV;


out vec3 vertexWorldspace;

// uniforms (P, V, M)
uniform mat4 P;
uniform mat4 V;
uniform mat4 M;

uniform mat4 lightSpaceTransform;

out vec4 vertex_LightSpace;


void main()
{

vertex_LightSpace=lightSpaceTransform*M*vec4(vertexPosition_modelspace,1.0);

gl_Position = P * V * M * vec4(vertexPosition_modelspace, 1);

// propagate the position of the vertex to fragment shader
vertex_position_modelspace = vertexPosition_modelspace;


//propagte vertex in world space
vec4 vm=M*vec4(vertexPosition_modelspace, 1);
vertexWorldspace=vm.xyz;


//propagate the normal of the vertex to fragment shader
vertex_normal_modelspace = vertexNormal_modelspace;

// propagate the UV coordinates

vertex_UV = vertexUV;
}

Here I will skip the lighting computations, as they work fine. I add the extra lines at the end so that the objects appear in a color corresponding to their depth in the cube map. 2nd render loop fragment shader:


uniform sampler2D diffuseColorSampler;
uniform sampler2D specularColorSampler;
uniform samplerCube shadowMap;

float ShadowCalculation(vec3 fragPos)
{

vec3 lightPos=light_position_worldspace;
vec3 fragToLight = fragPos - lightPos;

float closestDepth = texture(shadowMap, fragToLight).r;
closestDepth *= 25.0;
float currentDepth = length(fragToLight);
float bias = 0.05;
float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
return shadow;
}


//skipping to the end of the main loop
//fragment_color is the output

vec3 fragToLight = vertexWorldspace - light_position_worldspace;


float closestDepth = texture(shadowMap, fragToLight).r;
if(closestDepth>0){
fragment_color = vec4(1.0,0,0, 1.0);

}
fragment_color = vec4(vec3(closestDepth / 25.0), 1.0);

Running this, every object is black. I think this means that the variable 'closestDepth' which you see right above, is always zero, meaning something is wrong with the cube map...


I can upload anything else you might need. I've been looking at all this all night and I can't find what's wrong. Any help is greatly appreciated.


Edit: Checking my cube map in renderDoc I found out that it's all black.




xna - Resolving 2D Collision with Rectangles


For about a week I've been trying to grasp the basics of collision, but I've been driven to the point of wanting to just give up on everything when it comes to collision. I just don't understand it. Here's my code so far:



 public static void ObjectToObjectResponseTopDown(GameObject actor1, GameObject actor2)
{
if (CollisionDetection2D.BoundingRectangle(actor1.Bounds.X, actor1.Bounds.Y, actor1.Bounds.Width, actor1.Bounds.Height,
actor2.Bounds.X, actor2.Bounds.Y, actor2.Bounds.Width, actor2.Bounds.Height)) //Just a Bounding Rectangle Collision Checker
{
if (actor1.Bounds.Top > actor2.Bounds.Bottom) //Hit From Top
{
actor1.y_position += Rectangle.Intersect(actor1.Bounds, actor2.Bounds).Height;
return;
}

if (actor1.Bounds.Bottom > actor2.Bounds.Top) //Hit From Bottom
{
actor1.y_position -= Rectangle.Intersect(actor1.Bounds, actor2.Bounds).Height;
return;
}
if (actor1.Bounds.Left > actor2.Bounds.Right)
{
actor1.x_position += Rectangle.Intersect(actor1.Bounds, actor2.Bounds).Width;
return;
}

if (actor1.Bounds.Right > actor2.Bounds.Left)
{
actor1.x_position -= Rectangle.Intersect(actor1.Bounds, actor2.Bounds).Width;
return;

}
}
}

Essentially, what it does so far: it collides correctly when the bottom of the first rectangle hits the top of the second rectangle, but for the left and right sides it corrects the position either above or below the tile, and when the top of the first rectangle collides with the bottom of the second rectangle, it slides right through the second rectangle.



I'm really not sure what to do at this point.



Answer



Your collision detection code looks like it's working fine, however your collision response appears to be causing the issue. This is evident if you step through how you are actually resolving two colliding bodies.


Let's take a look at the following example (assume the blue rectangle is actor1 and moving left): Colliding bodies


As your code checks the top and bottom bounds first, these are the first to be resolved, and actor1's y position is adjusted like so:


The two bodies are now no longer colliding, but the collision wasn't resolved in the best way. This will almost always happen (unless the two rectangles' top and bottom bounds are perfectly in line) and the last two if statements will never be checked.


A primitive way to handle collision response would be to offset the rectangles by the smallest overlap; this would solve the problem in the above example, but you would still find some problems. A better approach would take an object's velocity into consideration, such as the answer found here.
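The "offset by the smallest overlap" idea can be sketched like this (my own illustrative example in Python, not the poster's XNA code): compute the penetration depth on each axis and push the moving rectangle out along whichever axis penetrates less.

```python
# Resolve a collision between two axis-aligned rectangles by pushing
# rectangle a out of rectangle b along the axis of smallest overlap.
def resolve(a, b):
    """a, b: dicts with keys x, y, w, h. Returns a's corrected (x, y)."""
    overlap_x = min(a["x"] + a["w"], b["x"] + b["w"]) - max(a["x"], b["x"])
    overlap_y = min(a["y"] + a["h"], b["y"] + b["h"]) - max(a["y"], b["y"])
    if overlap_x <= 0 or overlap_y <= 0:
        return a["x"], a["y"]  # not colliding, nothing to do
    if overlap_x < overlap_y:
        # Smaller horizontal penetration: push out horizontally,
        # in the direction a is relative to b.
        dx = -overlap_x if a["x"] < b["x"] else overlap_x
        return a["x"] + dx, a["y"]
    else:
        dy = -overlap_y if a["y"] < b["y"] else overlap_y
        return a["x"], a["y"] + dy

# blue overlaps red by 1 unit horizontally and 3 vertically,
# so it gets pushed out horizontally (to the right, flush with red's edge).
blue = {"x": 9, "y": 0, "w": 4, "h": 4}
red  = {"x": 0, "y": 1, "w": 10, "h": 4}
print(resolve(blue, red))  # (10, 0)
```

This fixes the case illustrated above, though as noted it is still a primitive response compared to one that considers velocity.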


unity - How do I create this windblown snow effect?


How do I create this snowy wind kind of effect, where wisps of snow appear to blow through the air close to the ground?


I have seen this in several games like Skyrim, the new God of War, but I don't know what name to call it by.


Example of windblown snow visible against a cliff face


I found this God of war gameplay https://youtu.be/b7u59Iutb0c



Answer



Note: I have not used Unity yet, so here's my very high-level/conceptual answer:


Aside from simply spawning particles in a particular area/direction, it looks like Skyrim is using another technique where a semi-transparent "blown snow" texture is quickly scrolling across a flat mesh coming off the ground.



The mesh would be completely stationary, but the texture on it would be scrolling quickly in the direction of the wind. To make a texture scroll across a mesh without moving the mesh itself, just translate the vertices' UV coordinates over time. To make the effect repeat correctly, you'll have to set edge sampling of the texture to 'repeat' or the equivalent (if this is not already the default).
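The UV-translation arithmetic can be sketched as follows (hypothetical numbers, in Python just to show the idea): with 'repeat' wrapping, only the fractional part of the UV coordinate matters, so the offset can grow forever while the sampled texture loops seamlessly.

```python
# Scroll a texture across a stationary mesh by translating UVs over time.
def scrolled_u(base_u, wind_speed, t):
    """UV u-coordinate after scrolling for t seconds; wraps into [0, 1)."""
    return (base_u + wind_speed * t) % 1.0

# A vertex that started at u = 0.25, with wind scrolling 0.4 UV units/second:
print(scrolled_u(0.25, 0.4, 0.0))  # 0.25
print(scrolled_u(0.25, 0.4, 1.0))  # shifted by the wind
print(scrolled_u(0.25, 0.4, 2.5))  # has wrapped past 1.0 back into range
```

In a shader this is typically just `uv.x = frac(uv.x + windSpeed * time)`, applied per frame.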


Also, since the mesh you'd be using would be 'paper thin' you may want to disable backface culling on the object in order to see the gust effect from both sides.


Final note: You may want to fade the effect out as it gets closer to the edges of the mesh, otherwise it may look like a cutout. This can be done via the mesh itself. Add extra vertices and triangles near the edges of the mesh, then set the color attributes on the outermost vertices to completely transparent.


Sunday, February 21, 2016

difference between "wouldn't" and "don't"



I have asked a similar question Use of "having" and 'with' in which @james k has given a very useful answer. While answering my question he has written the following sentences: "I'm going shopping with my friend" is ok. "I'm going shopping having my friend" is not. We would not say "I'm having my friend"


Can we say "don't" in the place of "wouldn't".? For example:


1.'we don't say "I'm having my friend" or "we don't want to say" I'm having my friend." Without changing the meaning of the sentence with "wouldn't?




unity - In Unity3D, what is the relation between inertiaTensor and inertiaTensorRotation?


In Unity there are two properties on Rigidbody that correspond to the moment of inertia tensor.


One of them is rigidbody.inertiaTensor, which I know is the diagonal of the inertia tensor.
The other is rigidbody.inertiaTensorRotation, which I don't quite understand. I have, though, created a rigidbody in such a way that I get it to be a value other than Quaternion.identity, but I still don't see the connection.



Can I describe the products of inertia of the tensor with the rotation, or... what is the relation between them, and where would I need them?



Answer



inertiaTensor is a Vector3, inertiaTensorRotation is a Quaternion. And from the docs for inertiaTensor:



The inertia tensor is rotated by the inertiaTensorRotation.



Essentially, inertiaTensor is the moment of inertia (defined as a tensor) and the inertiaTensorRotation is how that tensor is rotated.
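A numeric sketch of that relation (my own example, not Unity's internal code): the full world-space inertia matrix can be reconstructed as R * diag(inertiaTensor) * R^T, where R is the rotation matrix built from inertiaTensorRotation. Pure-Python 3x3 math, no external libraries:

```python
import math

def quat_to_matrix(w, x, y, z):
    """3x3 rotation matrix from a unit quaternion (w, x, y, z)."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(a):
    return [list(row) for row in zip(*a)]

# Principal moments (what inertiaTensor stores)...
I = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]
# ...rotated 90 degrees about the z axis (what inertiaTensorRotation stores).
half = math.radians(90) / 2
R = quat_to_matrix(math.cos(half), 0, 0, math.sin(half))

I_world = matmul(matmul(R, I), transpose(R))
# The x and y principal moments swap, as expected for a 90-degree z rotation:
print([round(I_world[i][i], 6) for i in range(3)])  # [2.0, 1.0, 3.0]
```

So the two properties together describe one tensor: the Vector3 holds its principal values, and the Quaternion holds the orientation of those principal axes.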


If you'd like them to be calculated automatically, you don't set them. Or you can call Rigidbody.ResetInertiaTensor which:



After calling this function, the inertia tensor and tensor rotation will be updated automatically after any modification of the rigidbody.




unity - How can I adapt A* pathfinding to work with platformers?


I have an A* implementation that works in "top down" situations where gravity is not taken into account for pathfinding. But I am looking to modify it to work in a 2D platformer. I am using the Unity Engine in C#, but any examples or even pseudocode would be really helpful. I have found a source so far, but it hasn't been that helpful, as I don't understand the way they have adapted A*. I have listed it below.


http://gamedevelopment.tutsplus.com/tutorials/how-to-adapt-a-pathfinding-to-a-2d-grid-based-platformer-theory--cms-24662 (Very specific to the author's implementation and hard to understand)




Answer



You don't need to adapt A* at all. The only consideration is where you put your nodes and how you connect them. The linked article seems to convert from a platformer-friendly model to a grid based pathfinding model, which I don't think you want.


A* itself is a tree search algorithm which finds the optimal path through your graph and requires a heuristic, meaning a function which estimates how close a node is to the target. In your case this can just be Manhattan or Euclidean distance. A* doesn't really care what the graph represents; it just sees it as a graph.


The only concern here is how to create the actual graph and how to determine movement cost between nodes. When moving on a 2 dimensional plane (so not a platformer), I've found that making a move in a cardinal direction (horizontal or vertical) costing '1' and a move in a diagonal direction costing '2.49' works well. Similarly, for your platformer, you will likely want to give a higher cost to jumps and drops.
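The heuristic and edge costs described above can be sketched like this (the exact numbers, like the jump and drop costs, are hypothetical tuning choices, not values from the answer):

```python
import math

# Two standard admissible heuristics for grid-like graphs:
def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Edge costs for a platformer graph: walking is cheap, jumps and drops cost
# more, so A* prefers walking routes when path lengths are comparable.
EDGE_COSTS = {"walk": 1.0, "jump": 3.0, "drop": 2.0}

def path_cost(moves):
    return sum(EDGE_COSTS[m] for m in moves)

print(manhattan((0, 0), (3, 4)))            # 7
print(euclidean((0, 0), (3, 4)))            # 5.0
print(path_cost(["walk", "walk", "jump"]))  # 5.0: two steps plus a jump
```

The heuristic must never overestimate the true remaining cost, so with jump/drop costs above 1 per unit of distance, plain distance heuristics remain safe here.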


Creating the graph


As the thread How can I implement platformer pathfinding? says, the way you will create your graph depends heavily on how your platformer works - what mobs can actually do and what you would like them to do. Here are a few examples though:


Simple platformer pathfinding graph


This first example assumes you cannot "jump through" blocks. Meaning you can't jump up through a block to reach a higher level, and can't jump down through a block. The graph can be simplified by removing all non-intersecting nodes (meaning nodes with exactly two ways out). I kept them in the image to contrast it with the next one.


Platformer pathfinding graph with allowed jumps to higher platforms


In this graph, jumps to higher platforms are allowed from lower ones.



Platformer pathfinding graph with allowed drops to lower platforms


In this graph, jumps are allowed to lower platforms. Usually games tend to have 'specially marked' platforms, where drops are allowed, so this sort of connection might only be present with a particular kind of platform in your game.


And of course you can also do this so both drops from platforms and jump to platforms are allowed: (just for the sake of completeness)


Platformer pathfinding graph with allowed drops to lower platforms and jumps to higher platforms


Pathfinding events (animations, physics, jumps)


You will likely want to implement some sort of animation for each of these, or some sort of behaviour which ensures the move actually happens. For example, for jumps you might want to apply some force to the mob. You'll need to calculate the amount of force yourself for your game, or determine it through trial and error. What I wanted to mention is that, once the path has been determined and stored in the NPC, and the path is actually being executed (meaning the NPC is following the stored path), you will want to keep track of the next node it's moving towards. Once the node is reached, you trigger some sort of event on the NPC, for example NPC.OnPathfindingNodeReached(action). The parameter action can be stored in the nodes for each of their possible exits. For example:


Actions stored in pathfinding nodes for possible paths
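The "action stored per exit" idea could be sketched like this (names such as on_node_reached are hypothetical, and it's in Python for brevity rather than the asker's C#): each node stores, per neighbour, the action needed to take that edge, and the NPC looks it up as it walks the stored path.

```python
# Each graph node stores the action required to reach each neighbour.
class Node:
    def __init__(self, name):
        self.name = name
        self.exits = {}  # neighbour name -> action string, e.g. "jump"

    def connect(self, other_name, action):
        self.exits[other_name] = action

class NPC:
    def __init__(self):
        self.performed = []

    def on_node_reached(self, node, next_name):
        # In a real game this would start an animation or apply a jump force.
        self.performed.append(node.exits[next_name])

a, b, c = Node("A"), Node("B"), Node("C")
a.connect("B", "walk")
b.connect("C", "jump")

npc = NPC()
path = [a, b, c]  # a path already produced by A*
for current, nxt in zip(path, path[1:]):
    npc.on_node_reached(current, nxt.name)

print(npc.performed)  # ['walk', 'jump']
```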


Movement cost and common errors


In terms of common non-obvious errors in pathfinding, they mostly have to do with mismatched movement costs. To ensure decent-looking pathfinding, you must ensure that shorter paths always have a smaller cost. Depending on whether you assign costs manually or algorithmically, this can be much harder to do properly than it might seem at first, and a major nightmare to debug. I recommend that you take great care when assigning costs and test everything as you go.


Movement cost consideration in pathfinding



I hope this post answered at least some of the questions not answered in the linked post. If you need more help, please comment.


Saturday, February 20, 2016

xna 4.0 - Alpha blending not rendering properly XNA 4.0


I'm trying to render a tree made out of 2 rectangles intersecting in the center at a 90-degree angle. The texture has an alpha channel, but whichever rectangle gets rendered second causes a weird problem with the transparency.


Here is my rendering code:


public override void render(GraphicsDevice gd, Camera camera)
{
    RasterizerState rs = new RasterizerState();
    rs.CullMode = CullMode.None;
    gd.RasterizerState = rs;
    gd.BlendState = BlendState.AlphaBlend;

    effect.CurrentTechnique = effect.Techniques["TexturedNoShading"];
    effect.Parameters["xWorld"].SetValue(Matrix.CreateTranslation(position));
    effect.Parameters["xView"].SetValue(camera.getViewMatrix());
    effect.Parameters["xProjection"].SetValue(camera.getProjectionMatrix());
    effect.Parameters["xTexture"].SetValue(m_texture);

    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Apply();

        gd.Indices = m_indexBuffer;
        gd.SetVertexBuffer(m_vertexBuffer);
        gd.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, m_vertices.Length, 0, m_indices.Length / 3);
    }

    rs = new RasterizerState();
    rs.CullMode = CullMode.CullCounterClockwiseFace;
    gd.RasterizerState = rs;

    gd.BlendState = BlendState.Opaque;
}


trigonometry - Raycasting "fisheye effect" question


Continuing my exploration of raycasting, I am very confused about how the correction of the fisheye effect works.


Looking at the screenshot below from the tutorial at permadi.com, the way I understand the cause of the fisheye effect is that the cast rays give distances from the player, rather than the distances perpendicular to the screen (or camera plane), which is what really needs to be displayed. The distance perpendicular to the screen, then, in my world, should simply be the difference in Y coordinates (Py - Dy), assuming that the player is facing straight upwards.


[screenshot from the permadi.com tutorial]


Continuing the tutorial, this is exactly how it seems to be according to the below screenshot. From my point of view, the "distorted distance" below is the same as the distance PD calculated above, and what's labelled the "correct distance" below should be the same as Py - Dy. Yet, this clearly isn't the case according to the tutorial. My question is, WHY is this not the same? How could it not be? What am I understanding and visualizing wrong here?



[screenshot from the tutorial showing the "distorted distance" and the "correct distance"]



UPDATE: Here's another perspective. The tutorial at lodev.org has another way of handling the fisheye effect which confuses me in the same way. This one relies on distance vectors more than angles, and calculates the perpendicular distance to the wall according to the formulas below, where mapX is the position of the player, rayPosX is the position of the wall that has been hit by a ray, and rayDirX is the direction of the ray (along with their Y counterparts, of course). (1 - stepX) / 2 is simply a way of adding 1 if the ray is on the left side of the field of view. By the same logic as in the tutorial above, the perpendicular distance is simply mapX - rayPosX in my world. Why does it need to be divided by rayDirX?


//Calculate distance projected on camera direction (oblique distance will give fisheye effect!)
if (side == 0)
perpWallDist = fabs((mapX - rayPosX + (1 - stepX) / 2) / rayDirX);
else
perpWallDist = fabs((mapY - rayPosY + (1 - stepY) / 2) / rayDirY);

Answer



I understand it well enough now. What I described in the question only applies when facing perfectly upwards/downwards or to the left/right. When the player faces any other direction, however, you of course have to convert the X or Y distance to the larger, real perpendicular distance to the camera plane, hence the extra terms in the formulas. I knew it was simple!
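Concretely, the correction amounts to multiplying the ray's Euclidean distance by the cosine of its angle relative to the view direction. A minimal sketch (the function name is hypothetical; angle convention assumed in radians):

```python
# Sketch of the fisheye correction: the ray returns the Euclidean distance
# from the player, but the projection needs the distance perpendicular to
# the camera plane. cos(relative angle) converts one into the other.
import math

def perpendicular_distance(ray_distance, ray_angle, player_angle):
    # ray_angle - player_angle is the angle between this ray and the
    # centre of the view (at most half the field of view).
    return ray_distance * math.cos(ray_angle - player_angle)

# Facing straight up (90 deg), a ray 30 deg off-centre that travelled
# 10 units hits a wall whose perpendicular distance is 10 * cos(30 deg).
d = perpendicular_distance(10.0, math.radians(120), math.radians(90))
print(round(d, 2))  # 8.66
```

The centre ray (relative angle zero) is unchanged, while rays at the edge of the field of view get shortened the most, which is exactly what flattens the fisheye curve into straight walls.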



verb forms - Should I use the past tense with did?


Should I use the past tense with did? For example, if I were to say:




The important question is: Did they knew what it means or not?



Or should I say:



The important question is: Did they know what it means or not?



In other words: Should I use past tense with did?


I looked at this:


Explanation of "did was [verb]" structure



but it didn't really answer my question.



Answer



Generally speaking, only the first verb in each clause is finite:



They knew what it meant.



Here, know is the finite verb. It changes form to agree with the subject (I know, she knows) and to indicate tense (They know, they knew).


To turn this into a question, we need to apply Subject-Auxiliary Inversion. But to do that, we need an auxiliary, so we add the dummy auxiliary do:



They did know what it meant.



Did they know what it meant?  ← did and they are inverted



Now do is the finite verb. It changes form to agree with the subject (I do, she does) and to indicate tense (They do, they did). The second verb, know, is non-finite and does not change form.


*Did they knew has two finite verbs and is ungrammatical.


grammaticality - When do multiple negatives cancel and when do they not?



Which of the following is not stated in the passage?

(a) Money will not be a factor in making the decision


Here, not only is there a negative in the question, but there is also one in answer (a). It is important not to miss the second one, and also important not to simply think that they cancel each other out. It does not, for example, follow from (a) that what is being said is that ‘money is stated in the passage to be a factor in making the decision’.



I apprehend that silence isn't guilt (in most developed nations), so the absence of (a) in the passage doesn't mean that (a) is true. Yet what's the big picture here; what are the general lessons to be learned? I'd like to cancel negatives, to simplify reading, whenever possible.


Source: p 27, Mastering the National Admissions Test for Law, Mark Shepherd




encryption - How to encrypt Save Files without using a key?



Say I made a simple program that reads a .dat file whose contents are encrypted in a binary format, decrypts it into a byte array, and then writes everything back to the file decrypted.


For Example:


I've made a binary encryption algorithm. => 0100100001100101001000000110100001100101011000...


But this is very very easy to decrypt...



Is there any way, ideally one that does not need any sort of key, to encrypt my save progress?



Answer



In general you should never invent your own cryptographic algorithms, unless you have at least a PhD in both mathematics and computer science. But there are many good stock algorithms which have no known attacks and have free implementations in many programming languages. For example RC5, AES or Blowfish. Depending on which technology you use to develop your game, it might even offer secure encryption out-of-the-box.


However, the question is whether encrypting savegames is a good idea at all.


First, when you have your game executable do the encryption and decryption, you have to include both the algorithm and the key in your game executable. That means a determined hacker can find them, extract them, and use them to build a savegame editor. So it can never be 100% secure.
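Given that limitation, one lighter-weight option (an illustration, not something the answer prescribes) is to skip encryption and just detect casual tampering with an integrity tag: the deterrence is similar, since in both cases the key ships with the game. A minimal sketch using Python's standard-library HMAC; the key and save format are hypothetical:

```python
# Sketch: append an HMAC tag so the game can detect casually edited save
# files. The key still ships with the executable, so as noted above a
# determined hacker can defeat this too.
import hmac
import hashlib

KEY = b"example-key-shipped-with-the-game"  # hypothetical embedded key
TAG_LEN = 32  # SHA-256 digest size

def write_save(data: bytes) -> bytes:
    tag = hmac.new(KEY, data, hashlib.sha256).digest()
    return data + tag

def read_save(blob: bytes) -> bytes:
    data, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(KEY, data, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("save file has been modified")
    return data

blob = write_save(b'{"gold": 100}')
assert read_save(blob) == b'{"gold": 100}'

tampered = blob.replace(b"100", b"999")
try:
    read_save(tampered)
except ValueError:
    print("tampering detected")
```

Unlike encryption this leaves the save readable, which is often fine: the goal is only to stop a player from editing values in a text editor, not to hide them.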


Second, why do you want to do this anyway? If it is an online game, you should store the gamestate online, where the players cannot modify it. If it is an offline game, then why bother? A cheater can only hurt their own game experience at worst. The honest players who want to enjoy your game as intended won't be affected by this at all. On the other hand, allowing players to cheat can add value to your game: it lets players experience the game in a different way, which can increase their long-term enjoyment of it.


Friday, February 19, 2016

Is it correct to end a sentence with a superlative without a following noun?


Is it correct to end a sentence with a superlative without a following noun?


For example: I am the happiest. He is the fastest.


If it is not correct, why do people so widely say "It is the best" without any noun afterwards? Is it incorrect usage, or is it an exception?




Thursday, February 18, 2016

Are abbreviations "proper" words?



As a consequence of a different discussion, I realized this:


Are abbreviations "proper" words?


(I use the broad meaning for abbreviation, not the strictest meaning)


E.g.:



  • C.I.A.

  • abbr.

  • Prof.


  • Dr.

  • ...


Note: of course, the word "abbreviation" in itself is a proper word.


Very related question: How do we define "proper" words?




collision detection - Given a plane and a point, how can I determine which side of the plane the point is on?


Given the point


Vector pos = new Vector(0.0, 0.20156815648078918, -78.30000305175781, 1.0);

and the plane (triangle)



Vector a = new Vector(-6.599999904632568, 0.0, -78.5, 1.0);
Vector b = new Vector(6.599999904632568, 0.0, -78.5, 1.0);
Vector c = new Vector(6.599999904632568, 4.400000095367432, -78.5, 1.0);

I want to get a plane normal pointing in the direction of pos


//Getting plane normal
Vector ac = Vector.Subtract(a,c);
Vector bc = Vector.Subtract(b,c);
Vector planeNormal = Vector.CrossProduct(bc, ac);


//Testing which side of the plane the point is on
double dprod = Vector.DotProduct(planeNormal, pos);
if (dprod < 0)
{
planeNormal.Negate();
}

But this method is wrong: the resulting planeNormal points in the negative Z direction, so it should not be negated. Is there a best practice for this? Please help me, I fail massively at math :)



Answer



Your method is mostly correct but misses one step. You can't simply use the point's position as the vector to take a dot product with; you need to create a direction vector from a point on the plane to your point. Any point on the plane will do (the direction doesn't need to be exact), so just use one of the corners.
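A minimal sketch of that fix in plain Python (the same maths as the C#-style code in the question): dot the normal with pos - c rather than with pos. With the question's values the signed result is negative, so in this case the normal does need to be negated to face pos:

```python
# Side-of-plane test: build the normal from two edge vectors, then dot it
# with the direction from a point on the plane (corner c) to pos.
def sub(u, v):
    return (u[0] - v[0], u[1] - v[1], u[2] - v[2])

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def side_of_plane(a, b, c, pos):
    normal = cross(sub(b, c), sub(a, c))  # same as CrossProduct(bc, ac)
    # > 0: normal faces pos; < 0: negate the normal; == 0: pos on the plane
    return dot(normal, sub(pos, c))

# Values from the question (w component dropped):
a = (-6.599999904632568, 0.0, -78.5)
b = (6.599999904632568, 0.0, -78.5)
c = (6.599999904632568, 4.400000095367432, -78.5)
pos = (0.0, 0.20156815648078918, -78.30000305175781)

print(side_of_plane(a, b, c, pos) < 0)  # True: negate the normal to face pos
```

Note that dotting the normal with pos directly mixes in the plane's distance from the origin, which is why the original test gave the wrong sign.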



word usage - What is the difference of "use", "utilize" and "employ"


The situation is that I'm writing a paper, and I want to use different words to express the meaning of "adopting" a method or approach.


All the three words "use", "utilize" and "employ" have the meaning of


to make use of something


So, is there any difference between the three words in terms of expressing this meaning? And is there a best word to use in specific conditions?



Answer



There is a subtle difference. For example, I can use an ice cream maker, but I can utilize the ice cream maker's ability to make ice cream. Employing, on the other hand, is fairly similar to the word "use", but it is also harder to explain, so I apologise if I get this part wrong.



When employing something, one is executing it. So, for example, if I were to say, "I employed the ice cream maker for churning," I would be saying that I used the ice cream maker in much the same way one would with "use". The only difference I can see is that use is more of a passive word, whereas employ tends to signify more involvement, as well as more emphasis on the action being done; though they are, almost always, interchangeable.




meaning - What does the phrase 'waking thoughts' mean?



Passage:



Suddenly he was standing on short springy turf, on a summer evening when the slanting rays of the sun gilded the ground. The landscape that he was looking at recurred so often in his dreams that he was never fully certain whether or not he had seen it in the real world. In his waking thoughts he called it the Golden Country



(George Orwell 1984)


Question:



What does the phrase 'waking thoughts' mean?



Link to the book (page #21)




Answer



The Cambridge Dictionary provides different definitions of waking as a noun and as an adjective. The noun meaning relates to the moment of waking. In this context, the word is used as an adjective, and so the adjectival definition is relevant: used to refer to a period of time or an experience during which you are awake.


According to this definition, waking means marked by full consciousness, awareness, and alertness. To quote further from the link:



This adjective most often occurs in phrases such as "every waking moment", "every waking hour", "every waking breath", and so on, the sense being roughly "at all times". Such phrases are often used together with possessives, such as in "her every waking moment" or "my every waking thought".



So, waking thoughts refers to anything that you think about when you are not asleep.


The expression can be used to describe what you are always thinking about: this might be relevant in this context, but it is more likely to simply be a contrast with the reference earlier in the paragraph to in his dreams.


grammar - Can “do not” be together in negative question?



I see this quote in a lifehacker article:



“Why do not the mothers of mankind interfere in these matters to prevent the waste of that human life of which they alone bear and know the cost?”



I've read the grammar rule that says only the negative contraction can come before the subject. So it should be "why do the mothers of mankind not...".




Wednesday, February 17, 2016

LOVE Physics - Breaking Joint Chains (LUA / Box2D)


Wasn't sure whether to post here or on SO so please move if needed.


I've been having a look into the Box2D physics API provided by LOVE to try to create a swinging flail or weighted rope, but I'm having some trouble keeping the jointed elements close together.


So far I've made a series of circular bodies and linked them together using revolute joints with the first element being static and the final element being slightly larger/heavier to act as a weight.


The code to generate each link is as follows:



for i = 1, segments, 1 do
    link = {}

    if (i == 1) then
        link.body = love.physics.newBody(world, xpos, ypos, "static") --Starting link
    else
        link.body = love.physics.newBody(world, xpos, ypos, "dynamic")
    end

    if (i == segments) then
        link.shape = love.physics.newCircleShape(endlink_radius) --Ending link
    else
        link.shape = love.physics.newCircleShape(link_radius)
    end

    link.fixture = love.physics.newFixture(link.body, link.shape) --Fix bodies to shapes

    table.insert(chain, link)

    ypos = ypos + link_distance --Place next link further down, keeping x position
end

And the code to join each link:


for i = 2, #chain, 1 do
    x1, y1 = chain[i-1].body:getPosition() --First link position
    x2, y2 = chain[i].body:getPosition() --Second link position
    chain[i-1].join = love.physics.newRevoluteJoint(chain[i-1].body, chain[i].body, x1, y1, x2, y2, false) --Join the two with a revolute joint
end

And this works quite well, producing a chain like this:


Chain1


The next step is to move the chain. I'm currently moving the first static element and having it "drag" the rest of the links along with it using the following code:


    position = vo.add(position, velocity) --Vector addition add velocity to current position to get new position
chain[1].body:setPosition(position.x, position.y) --Update the first link position

This works fine at slow speeds (the chain currently moves towards the mouse cursor), but any rapid movement causes everything to fall apart.



Chain2


Is there any way to easily keep the elements close together?


I've tried adjusting the weights and world gravity but couldn't see any difference. I've also thought about applying a force to each element when the first one moves, but that doesn't seem quite right.


Thanks


Solution (Thanks to NauticalMile)


Change the previous position update:


    position = vo.add(position, velocity) --Vector addition add velocity to current position to get new position
chain[1].body:setPosition(position.x, position.y) --Update the first link position

To a velocity update and have Box2D handle the movement:



    chain[1].body:setLinearVelocity(velocity.x, velocity.y)

Answer



Directly editing the position of a Box2D body will produce non-physical behaviour; setPosition exists because sometimes you need to teleport bodies to a far-away location, or reset the position of an object without destroying and re-creating it, etc. In your case, the joints are not handling the position adjustments well.


The setPosition function should not be used to advance the physics; Box2D takes care of that all on its own when b2World:step is called. For the chain to move smoothly, you just need to manipulate the top link in a different manner. I propose one of two suggestions:



  1. If the top link in the chain is supposed to be attached to something much bigger moving at a constant velocity, then I would recommend changing the bodyType of the top link to 'kinematic' using the Body:setType function. Just initialize the velocity of the top link body to a reasonable value. This is demonstrated in RUBE:


kinematic_body



  2. If you want the top link to have dynamic behaviour, while still constraining it to move only along a given axis, you can create a 'dummy' static body and connect the top link to the static body via a PrismaticJoint. Again, here's an example done in RUBE where I have applied a small force at the beginning using the MouseJoint:



prismatic_joint


In the real world, objects accelerate when forces are applied to them. Box2D functions in much the same way; enter the body:applyForce function. You can use this to apply a force toward the cursor position. You can also add some linear damping to the first link, or any of the links, to make sure things don't get too out of hand.
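The "force toward the cursor, plus damping" idea can be sketched language-agnostically; the vector maths below (plain Python, not the LOVE API, with hypothetical names and constants) is what you would feed into Body:applyForce each frame. Here the force is spring-like, proportional to the distance from the target:

```python
# Sketch: spring force toward a target, opposed by linear damping.
def steer_toward(pos, vel, target, dt, strength=50.0, damping=2.0):
    # Force proportional to displacement, minus damping times velocity.
    fx = strength * (target[0] - pos[0]) - damping * vel[0]
    fy = strength * (target[1] - pos[1]) - damping * vel[1]
    # Semi-implicit Euler integration, assuming unit mass; a physics engine
    # like Box2D does this step for you once the force is applied.
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

pos, vel = (0.0, 0.0), (0.0, 0.0)
for _ in range(500):  # about 8 seconds at 60-ish fps
    pos, vel = steer_toward(pos, vel, (10.0, 5.0), dt=0.016)
print(round(pos[0]), round(pos[1]))  # settles at the target: 10 5
```

Without the damping term the body would orbit and overshoot the cursor forever; with it, the motion decays toward the target, which is exactly the "don't get too out of hand" behaviour described above.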


Both of these options should be very stable, but let me know if there are any issues.

