Tuesday, March 31, 2015

rendering - Should actors in a game be responsible for drawing themselves?


I am very new to game development, but not to programming.


I am (again) playing around with a Pong type game using JavaScript's canvas element.


I have created a Paddle object which has the following properties...



  • width


  • height

  • x

  • y

  • colour


I also have a Pong object which has properties such as...



  • width

  • height

  • backgroundColour


  • draw().


The draw() method currently resets the canvas, and that is where a question came up.


Should the Paddle object have a draw() method responsible for drawing itself, or should the Pong object's draw() be responsible for drawing its actors? (I assume that is the correct term; please correct me if it isn't.)


I figured it would be advantageous for the Paddle to draw itself, as I instantiate two of them, Player and Enemy. If the drawing were done in Pong's draw(), I'd need to write similar code twice.


What is the best practice here?


Thanks.



Answer



Having actors draw themselves is not a good design, for two main reasons:


1) it violates the single responsibility principle, as those actors presumably had another job to do before you shoved render code into them.



2) it makes extension difficult; if every actor type implements its own drawing, and you need to change the way you draw in general, you may have to modify a lot of code. Avoiding overuse of inheritance can alleviate this to some extent, but not completely.


It's better for your renderer to be handling the drawing. After all, that's what it means to be a renderer. The renderer's draw method should take a "render description" object, which contains everything you need to render a thing. References to (probably shared) geometry data, instance-specific transformations or material properties such as color, et cetera. It then draws that, and doesn't care what that render description is supposed to "be."


Your actors can then hold on to a render description they create themselves. Since actors are typically logic processing types, they can push state changes to the render description as needed -- for example, when an actor takes damage it could set the color of its render description to red to indicate this.


Then you can simply iterate over every visible actor, enqueue their render descriptions into the renderer, and let it do its thing (basically; you could generalize this even further).
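To make this concrete, here is a minimal sketch of the render-description idea in TypeScript. All the names (RenderDescription, Renderer, Paddle) are illustrative, and Context2DLike is just the small slice of the canvas 2D API a Pong game needs, so the renderer works with a real context or a test double.

```typescript
// What the renderer needs to draw one rectangle; the actor decides the values.
interface RenderDescription {
  x: number;
  y: number;
  width: number;
  height: number;
  colour: string;
}

// Minimal subset of CanvasRenderingContext2D the renderer uses.
interface Context2DLike {
  fillStyle: string;
  fillRect(x: number, y: number, w: number, h: number): void;
}

class Renderer {
  private queue: RenderDescription[] = [];

  enqueue(desc: RenderDescription): void {
    this.queue.push(desc);
  }

  // Draw everything queued this frame; the renderer never asks
  // what a description is supposed to "be".
  drawAll(ctx: Context2DLike): void {
    for (const d of this.queue) {
      ctx.fillStyle = d.colour;
      ctx.fillRect(d.x, d.y, d.width, d.height);
    }
    this.queue = [];
  }
}

// The actor owns game logic and pushes state changes into its description.
class Paddle {
  readonly desc: RenderDescription;

  constructor(x: number, y: number) {
    this.desc = { x, y, width: 10, height: 60, colour: "white" };
  }

  takeDamage(): void {
    this.desc.colour = "red"; // visual state change, but no drawing code here
  }
}
```

Each frame the game loop enqueues the descriptions of every visible actor and calls drawAll once; swapping the canvas renderer for, say, a WebGL one then only touches the Renderer class.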


word usage - Exist vs exists in mathematics


Are there any rules about when to say exist and when to say exists in mathematics? For example, both these sentences appear in a book of mine:



There exist α_i in I such that x_n = Σ α_i x_i.


There exists s'' in S such that s''(s'msm') = 0.



I'm a bit confused about when to use exist and when exists.




Answer



Exists and exist follow the ordinary convention for verbs: one is singular and the other is plural. Where mathematical usage differs from ordinary usage is in the way singular and plural are indicated in the subject that follows, and in an implied “for all” later in the sentence when the subject is plural.


Exists is singular:



There exists s'' in S such that s''(s'msm') = 0.



Or, spelled out more explicitly:



There exists a number s'' in the set S, such that s''(s'msm') = 0.




Exist is plural:



There exist α_i in I such that x_n = Σ α_i x_i.



This is where the mathematical usage differs from ordinary usage. Spelled out explicitly, this would be:



There exist numbers α_1, α_2, α_3, etc., which are elements of the set I, such that for each subscript, if we refer to the subscript as i, then x_n = Σ α_i x_i.



The convention is tricky for a beginner to understand because it depends on your knowing that i is commonly used as a variable that will stand for multiple subscripts. The plural form of “exist” is actually a helpful clue. However, you must understand the implied “for each” later in the sentence. It’s implied by the fact that the sentence uses i to stand for multiple values.


How can I get voice recognition features into the Unity Game Engine?




How can I get voice recognition features into the Unity game engine? Is there a plug-in or a framework (hopefully free) that I could use? If so, do you have any ideas on how to install it? Also, how much of a problem would background noise in the game be, interfering with voice input? Are there any examples of games on the market that use this (besides Spain 3D for the Torque Game Engine)?




Monday, March 30, 2015

A word for "getting colored"?


Is there a word or phrase for changing from being transparent (or colorless, or maybe white) to a solid color? "Getting colored" somehow doesn't feel right. Or maybe something with "tint"? I don't know.


The context in which I'm wondering would be something like this:




The status bar of this window gradually [...] as you scroll down.




Answer



Turns color or colors



The status bar gradually turns color as you scroll down.


The status bar gradually colors as you scroll down.



Turn color is clearer, because to color more commonly means that a subject causes an object to become colored. However, "the status bar colors" is also grammatical:




verb (used without object)
25.
to take on or change color:
The ocean colored at dawn.



"color". Dictionary.com Unabridged. Random House, Inc. 19 Jan. 2016


Some examples of turns color follow.


Example:




there's an arsenal of equipment out there: # Digital thermometers, including one that beeps when it hits top degree (from $ 5 to $ 8), and a " talking " model that tells you a child's temperature (about $ 15). These take about 30 seconds to work. # Forehead strips ($ 2 to $ 3), based on liquid crystal technology, in which a thin plastic strip is placed against a dry forehead. A black bar on the strip turns color to indicate body temperature, in about 15 seconds. # Tympanic thermometers, like Thermoscan and Omron brands (starting at about $ 50 to $ 75 or more), which are pointed at a child's ear and give a digital reading in a few seconds.



Source Information:
Date 1998 (19980125)
Publication information LFS; Pg. G-06
Title Coping with a fever Lots of options to take child's temperature
Author By Diane Eicher, Denver Post Health Writer
Source Denver Post


Retrieved from the Corpus of Contemporary American English on turns color, today.


===============



Example:



'Smart Condom' Turns Color Depending On Your STD



A feature article at Complex.com


===============


Example:



Heat sensitive material on the mug surface turns color when hot liquid is poured in.




Merchandise description for color-changing mug listed on ebay.com


Rules for "on", "at", and "in": preposition of time


I have found the following usage of "on", "at" and "in" on the internet. Are there any other exceptions and/or rules for this?



Prepositions of Time


Use in for



  • Months: in April / in September / in that month

  • Seasons: in (the) summer / in (the) winter

  • Years: in 1332 / in 1984 / in that year / in the next year


  • Long Period(s) of Time: in a former century / in the 90's / in the Ice Age / in the past / in the future


Exceptions: in the morning / in the evening


Use at for



  • Time: at 8 o'clock / at 9:30 / at bedtime / at sunset / at dinnertime / at 5:33:10 AM


Exceptions: at Christmas (= during the Christmas holidays but not necessarily on December 25th) / at Easter / at noon / at night / at the weekend / at the present time / at the moment


Use on for




  • Days: on Monday / on Friday / on Christmas Day (= on December 25th) / on Easter Sunday / on Independence Day

  • Dates: on February 18th / on her birthday / on 21 March 2015


Notice:



  • We have in the morning and on Monday but we say on Monday morning, not in Monday morning, and so on.

  • When we say last, next, every, this, we do not also use at, in, on: She runs next Tuesday. (not on next Tuesday) / He leaves us every Easter. (not at every Easter) / I'll see you this evening. (not in this evening)

  • Look at these examples for the combination of times in a sentence: We will meet next week at six o’clock on Monday. / I heard a funny noise at about eleven o’clock last night. / It happened last week at seven o’clock on Monday night.




Look at this answer as well which says:



This all hints at a coherent metaphor: hours and other short periods of time are places; days are surfaces; months and longer time periods are containers.




Answer



The chart could be greatly improved by expanding the category and examples of when we use at. While at is used for clock times, it is also used for other specific times, and thus the many uses are no longer "exceptions" but actual regular uses:


at midnight
at breakfast time
at lunch time
at dinner time

at sunrise
at sunset
at the moment
at (the) present, at the present moment/time
at the right time
at the same time
at dawn
at dusk
at noon
at night

at nighttime


Indeed, we could label at as referring to specific times, and in as referring to relatively nonspecific time periods (akin to during a month, a season, a year, a decade, a century, a nonspecific period of time), while on refers to specific days and dates.


So in would also include:


in the past/future
in those days
in the good old days
in my youth
in my heyday
in my prime
in my old age

in my high school days
in my college years


Thus:
in the morning, in the mornings
in the afternoon(s)
in the daytime
in the evening(s)
in the night
in the middle of the night


are no longer "exceptions to the rule," but in accord with "nonspecific time periods" (as compared to at night and on Friday nights).



geometry - 2D isometric: screen to tile coordinates


I'm writing an isometric 2D game and I'm having difficulty figuring out precisely which tile the cursor is on. Here's a drawing:




where xs and ys are screen coordinates (pixels), xt and yt are tile coordinates, W and H are tile width and tile height in pixels, respectively. My notation for coordinates is (y, x) which may be confusing, sorry about that.


The best I could figure out so far is this:


int xtemp = xs / (W / 2);
int ytemp = ys / (H / 2);
int xt = (xs - ys) / 2;
int yt = ytemp + xt;

This seems almost correct but gives me a very imprecise result, making it hard to select certain tiles; sometimes it selects a tile next to the one I'm trying to click. I don't understand why, and I'd like someone to help me understand the logic behind this.


Thanks!




Answer



For an accurate mapping, consider the following.


Let's first consider how to transform coordinates from isometric space, determined by the i and j vectors (as in isometricMap[i,j]), or yt and xt on screen, into screen space, determined by the screen's x and y. For simplicity's sake, let's assume your screen space is aligned with the isometric space at the origin.


One way to do the transform is to rotate first, then scale along the y- or x-axis. You can build a matrix for this and then use its inverse; the inverse operation is exactly what you want for going from screen to tile coordinates.


So: scale the values in reverse, then rotate backwards, and round the results down.


There are other ways to do this, but this approach seems the most proper to me.
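As a concrete illustration: for the common diamond layout, where tile (xt, yt) sits at screen position xs = (xt - yt)·W/2, ys = (xt + yt)·H/2, inverting that pair of equations gives the sketch below (TypeScript; the function name and the origin-aligned, camera-less setup are assumptions).

```typescript
// Inverse of the diamond-projection equations:
//   xs = (xt - yt) * W / 2
//   ys = (xt + yt) * H / 2
// Solving for xt and yt and flooring once picks the tile under a screen point.
function screenToTile(
  xs: number,
  ys: number,
  W: number, // tile width in pixels
  H: number  // tile height in pixels
): { xt: number; yt: number } {
  const a = xs / (W / 2);
  const b = ys / (H / 2);
  return {
    xt: Math.floor((a + b) / 2),
    yt: Math.floor((b - a) / 2),
  };
}
```

Note that the integer division in the question's code (`xs / (W / 2)` on ints) throws away the fractional part too early, which is exactly the kind of rounding that makes tile picking feel imprecise; keep the math in floats and round once, at the end.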


c++ - Terrain shader from heightmap opengl GLSL


I generated a terrain from a heightmap and now I'd like to apply a shader to it that blends different textures based on height, but I can't adapt any of the code I've found online to my project. This is the image rendered with GL_LINE: [wireframe screenshot]


The matrix generated from the heightmap has values 0-255, with -1 as the end-of-line marker.


This is the code of the terrain; the matrix is stored in a map<…> data structure:


#include "Terrain.h"

Terrain::Terrain(Scene* s, string filename): Entity(s) {
    this->hm = s->getHeightmap(filename);
    texture = scene->getTexture("avatar2.jpg");
    shader = scene->getShader("terrain.shader");
}

Terrain::~Terrain() {
}

void Terrain::draw() {
    cout << "Draw Terrain" << endl;

    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    //glUseProgramObjectARB(0);
    glUseProgram(this->shader->getRes());
    glTranslated(0, -20, -31.4);
    glRotatef(45, -1, 0, 0);

    const float offX = -25;
    const float offY = 0;
    const float lato = 0.5;
    const int limit = 100;
    const int limsx = 0;

    glBegin(GL_QUADS);
    for (float i = limsx; i < limit - 1; ++i) {
        for (float j = limsx; j < limit - 1; ++j) {
            glVertex3f(offX + lato*(j+1), offY + lato*i,     hm->getHeight(i, j+1)/50);
            glVertex3f(offX + lato*(j+1), offY + lato*(i+1), hm->getHeight(i+1, j+1)/50);
            glVertex3f(offX + lato*j,     offY + lato*(i+1), hm->getHeight(i+1, j)/50);
            glVertex3f(offX + lato*j,     offY + lato*i,     hm->getHeight(i, j)/50);
        }
    }
    glEnd();
}

Can you help me generate a terrain shader? Thank you very much.


EDIT: Thanks a lot to sakul_ca. This is what I got by using and editing his code:


[screenshot of the textured terrain]


I used a shader that applies four textures at different heightmap values.



Answer



To give you exactly what you are asking for, look at the bottom two code examples.



The code examples preceding them show the setup required to use them. These examples use Vertex Buffer Objects, which are something you should start using instead of glBegin/glVertex/glEnd, because those are all deprecated OpenGL functions.


You also ask about multiple textures in your question. You can do that by adding more textures to the shaders and setting them in the appropriate places in the C++ code.


Load Function


void LoadModel ( void )
{
    // ...
    // 1. Load the model from file.
    // 2. Get model information
    //    - Number of vertices
    //    - Texture coordinates
    // 3. Make data information
    //    - e.g.
    //    - Vertex_p4t4* pTempVertArray = new Vertex_p4t4[numVertices * 2];
    //    - GLuint* pIndexArrayLocal = new GLuint[numIndicesInIndexArray * 2];
    // ...

    GLuint vboID;
    glGenVertexArrays( 1, &vboID );
    // Check for OpenGL error.

    glBindVertexArray( vboID );
    // Check for OpenGL error.

    // [SimpleTextureShader.vertex.glsl]
    // layout (location=0) in vec4 in_Position;
    // layout (location=1) in vec4 in_UVx2;
    glEnableVertexAttribArray( 0 );
    glEnableVertexAttribArray( 1 );
    // Check for OpenGL error.

    GLuint vertexBufferID;
    GLuint indexBufferID;
    glGenBuffers( 1, &vertexBufferID );
    glGenBuffers( 1, &indexBufferID );
    // Check for OpenGL error.

    glBindBuffer( GL_ARRAY_BUFFER, vertexBufferID );
    // Check for OpenGL error.

    // In this case sizeof( YOURVERTEXDATA ) == 32.
    unsigned int vertSize = sizeof( YOURVERTEXDATA );
    unsigned int size = numVerticesInModel * vertSize;
    glBufferData( GL_ARRAY_BUFFER, size, VERTEXDATA, GL_STATIC_DRAW );
    // Check for OpenGL error.

    // Attribute 0 = position (offset 0), attribute 1 = UVs (offset 16).
    glVertexAttribPointer( 0, 4, GL_FLOAT, GL_FALSE, vertSize, (GLvoid*)0 );
    glVertexAttribPointer( 1, 4, GL_FLOAT, GL_FALSE, vertSize, (GLvoid*)16 );
    // Check for OpenGL error.

    glBindBuffer ( GL_ELEMENT_ARRAY_BUFFER, indexBufferID );
    // Check for OpenGL error.

    glBufferData ( GL_ELEMENT_ARRAY_BUFFER, TOTALSIZEINBYTES, pIndexArrayLocal, GL_STATIC_DRAW );
    // Check for OpenGL error.

    glBindVertexArray( 0 );
    // Check for OpenGL error.

    // ...
    // Delete things you don't need anymore.
    // ...
    // Add model information to wherever you want.
    // ...

    return;
}

Render Function


void Render ( void )
{
    // Set the texture shader program
    glUseProgram( currentShaderProgramID );

    // uniform mat4 ModelMatrix;
    // uniform mat4 ViewMatrix;
    // uniform mat4 ProjectionMatrix;
    // uniform sampler2D texture_0;

    // Normally you would get this information once, when loading the shader,
    // and cache it instead of querying it every time you render.
    GLint ModelMatrixUniformLocation = glGetUniformLocation( currentShaderProgramID, "ModelMatrix" );
    GLint ViewMatrixUniformLocation = glGetUniformLocation( currentShaderProgramID, "ViewMatrix" );
    GLint ProjectionMatrixUniformLocation = glGetUniformLocation( currentShaderProgramID, "ProjectionMatrix" );
    GLint Texture0UniformLocation = glGetUniformLocation( currentShaderProgramID, "texture_0" );

    // I'm using glm for this example; you can use whatever math library you
    // wish, or write your own.

    // Create a model matrix to use
    glm::mat4 matModel = glm::mat4( 1.0f ); // initialize as identity

    // Create a view matrix to use
    glm::mat4 matView = glm::lookAt( eye, look, up );

    // Create a projection matrix to use
    glm::mat4 matProj = glm::perspective( fovy, aspect, zNear, zFar );

    // OpenGL does matrix transformation calculations backwards, so you want to do:
    // post-rotation, translation, pre-rotation, scale.
    // We are only translating in this example.
    matModel = glm::translate( matModel, ObjectPosition );

    // Send the matrix values to the shader program
    glUniformMatrix4fv( ModelMatrixUniformLocation, 1, GL_FALSE, glm::value_ptr( matModel ) );
    glUniformMatrix4fv( ViewMatrixUniformLocation, 1, GL_FALSE, glm::value_ptr( matView ) );
    glUniformMatrix4fv( ProjectionMatrixUniformLocation, 1, GL_FALSE, glm::value_ptr( matProj ) );

    // Send the texture information to the shader program
    glActiveTexture( GL_TEXTURE0 );
    // Check for OpenGL error.

    glBindTexture( GL_TEXTURE_2D, textureID );
    // Check for OpenGL error.

    // Note: the sampler uniform takes the texture *unit* (0 for GL_TEXTURE0),
    // not the texture object ID.
    glUniform1i( Texture0UniformLocation, 0 );
    // Check for OpenGL error.

    // Render your object(s) [ vboID from the LoadModel function ]
    glBindVertexArray( vboID );
    // Check for OpenGL error.

    unsigned int numberOfIndices = NumberOfTrianglesInMesh * 3;
    glDrawElements( GL_TRIANGLES, numberOfIndices, GL_UNSIGNED_INT, (GLvoid*)0 );
    // Check for OpenGL error.

    // All done
    return;
}

SimpleTextureShader.vertex.glsl



#version 400

layout (location=0) in vec4 in_Position;
layout (location=1) in vec4 in_UVx2;

out vec4 ex_Position;
out vec4 ex_UVx2;

uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;

void main ( void )
{
    mat4 MVPMatrix = ProjectionMatrix * ViewMatrix * ModelMatrix;
    gl_Position = MVPMatrix * in_Position;

    ex_Position = gl_Position;
    ex_UVx2 = in_UVx2;

    return;
}

SimpleTextureShader.fragment.glsl


#version 400

// From the vertex shader
in vec4 ex_Position;
in vec4 ex_UVx2;

// Texture information
uniform sampler2D texture_0;

// If you want multiple textures...
// uniform sampler2D texture_1;
// uniform sampler2D texture_2;
// uniform sampler2D texture_3;
// ...

out vec4 out_Colour;

void main ( void )
{
    vec4 colour = vec4( 0.0f, 0.0f, 0.0f, 0.0f );
    colour = texture( texture_0, ex_UVx2.xy );

    // If you want multiple textures...
    // vec4 texture1Colour = texture( texture_1, ex_UVx2.xy );
    // vec4 texture2Colour = texture( texture_2, ex_UVx2.xy );
    // vec4 texture3Colour = texture( texture_3, ex_UVx2.xy );
    // Then do what you want with them, e.g.
    // vec4 tex1Times2 = texture1Colour * texture2Colour;

    // Blend by height (use ex_Position.y; gl_Position is not readable here):
    // vec4 texHeightColour = texture1Colour * ( 10.0f - ex_Position.y );
    // texHeightColour += texture2Colour * ex_Position.y;
    // colour = clamp( texHeightColour, 0.0f, 1.0f );

    out_Colour = colour;

    return;
}

c++ - How to lead a moving target from a moving shooter


I saw this question: Predicting enemy position in order to have an object lead its target. My situation is a little different though.



My target moves, and the shooter moves. Also, the shooter's velocity is added to the bullets' velocities, i.e. bullets fired while sliding to the right will have a greater velocity toward the right.


What I'm trying to do is get the enemy to determine where to shoot in order to hit the player. Using the linked SO solution, unless the player and enemy are both stationary, the velocity difference causes a miss. How can I prevent that?




Here is the solution presented in the Stack Overflow answer. It boils down to solving a quadratic equation of the form:


a * sqr(x) + b * x + c == 0

Note that by sqr I mean square, as opposed to square root. Use the following values:


a := sqr(target.velocityX) + sqr(target.velocityY) - sqr(projectile_speed)
b := 2 * (target.velocityX * (target.startX - cannon.X)
+ target.velocityY * (target.startY - cannon.Y))

c := sqr(target.startX - cannon.X) + sqr(target.startY - cannon.Y)

Now we can look at the discriminant to determine if we have a possible solution.


disc := sqr(b) - 4 * a * c

If the discriminant is less than 0, forget about hitting your target -- your projectile can never get there in time. Otherwise, look at two candidate solutions:


t1 := (-b + sqrt(disc)) / (2 * a)
t2 := (-b - sqrt(disc)) / (2 * a)

Note that if disc == 0 then t1 and t2 are equal.
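As a hedged sketch, the quoted formulas can be wrapped up as a runnable function (TypeScript; the function name and the linear fallback for the a ≈ 0 case are my additions, everything else follows the math above):

```typescript
// Returns the time at which a projectile of speed `speed`, fired from the
// origin, can hit a target starting at (dx, dy) with velocity (vx, vy),
// or null when no interception is possible.
function interceptTime(
  dx: number, dy: number,
  vx: number, vy: number,
  speed: number
): number | null {
  const a = vx * vx + vy * vy - speed * speed;
  const b = 2 * (vx * dx + vy * dy);
  const c = dx * dx + dy * dy;

  // Degenerate case: target speed equals projectile speed, so the
  // quadratic collapses to the linear equation b*t + c == 0.
  if (Math.abs(a) < 1e-9) {
    if (Math.abs(b) < 1e-9) return null;
    const t = -c / b;
    return t >= 0 ? t : null;
  }

  const disc = b * b - 4 * a * c;
  if (disc < 0) return null; // the projectile can never get there in time

  const sq = Math.sqrt(disc);
  const t1 = (-b + sq) / (2 * a);
  const t2 = (-b - sq) / (2 * a);

  // Keep the smallest non-negative root: the earliest reachable hit.
  const candidates = [t1, t2].filter((t) => t >= 0);
  return candidates.length > 0 ? Math.min(...candidates) : null;
}
```

The aiming direction is then the point (dx + vx·t, dy + vy·t) divided by t, exactly as the relative-frame code in the answer below computes it.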




Answer



Okay, let's put some sanity into this. I'm afraid you are not making it easy at all: your code does not compile, is inconsistent with variable names (playerVelocityX becomes playerXvelocity after a few lines? what is xVelocity?), and is too verbose. It is basically impossible to debug unless you put considerable effort into it.


So, here are the things to fix:


Bullet speed


The bullet speed must be 30, period. There is no need for the computations you are doing: the change of reference frame is precisely there to avoid that complexity. You only add the enemy's velocity after you have found a solution, when you go back to the main reference frame.


Solution validity


You are not checking that the time solution is positive.


Numerous coding errors


You are testing time1 and time2 but always using time1 in the results.


You do playerXvelocity - yVelocity which is inconsistent.



You are doing / 2 * a instead of / (2.f * a). This is the worst error and it's why everything is going wrong.


You compute shootx and shooty as the final position of the bullet, whereas what you are looking for is the velocity of the bullet.


Fixed code


float const bulletSpeed = 30.f;
/* Relative player position */
float const dx = playerX - enemyX;
float const dy = playerY - enemyY;
/* Relative player velocity */
float const vx = playerVelocityX - enemyVelocityX;
float const vy = playerVelocityY - enemyVelocityY;

float const a = vx * vx + vy * vy - bulletSpeed * bulletSpeed;
float const b = 2.f * (vx * dx + vy * dy);
float const c = dx * dx + dy * dy;
float const disc = b * b - 4.f * a * c;

shouldShoot = false;

if (disc >= 0.f)
{
    float t0 = (-b - std::sqrt(disc)) / (2.f * a);
    float t1 = (-b + std::sqrt(disc)) / (2.f * a);
    /* If t0 is negative, or t1 is a better solution, use t1 */
    if (t0 < 0.f || (t1 < t0 && t1 >= 0.f))
        t0 = t1;
    if (t0 >= 0.f)
    {
        /* Compute the ship's heading */
        shootx = vx + dx / t0;
        shooty = vy + dy / t0;
        heading = std::atan2(shooty, shootx) * RAD2DEGREE;
        /* Compute the bullet's velocity by adding the enemy's velocity */
        bulletVelocityX = shootx + enemyVelocityX;
        bulletVelocityY = shooty + enemyVelocityY;
        shouldShoot = true;
    }
}

relative pronouns - How to use THAT and WHO



I have some doubts about the usage of that and who. Sometimes I read sentences such as "You are someone I love" / "You are someone WHO I love", or "People were asked to describe the qualities they look for in a friend" / "People were asked to describe the qualities THAT they look for in a friend".


Why are WHO or THAT omitted in some of these sentences and not in others? Could somebody help me with this? I'll appreciate it! Thanks a million.




poetry - weep "to have" that which it fears to lose



This thought is as a death which cannot choose
But weep to have that which it fears to lose.


Sonnet 64, Shakespeare



What is this to after weep? Is it like "I am sad to hear of your father's death"?



Answer



You have parsed this correctly. An infinitive clause subordinated to a clause expressing a strong emotion usually expresses the cause of the emotion:




I am sad/saddened/distressed/sorry/dismayed  to hear of your father's death.
He was angry/angered/enraged  to find his orders had not been obeyed.
They were amazed/astonished/thunderstruck  to discover the town still thriving.
John was greatly relieved  to find his wallet where he left it.



This is also true when the head clause expresses an emotional reaction rather than a state:



I weep   to hear of your father's death.
He rejoiced  to see his enemy brought low.
The little dog laughed  to see such sport.




phrase usage - Are 'by the time' and 'when' interchangeable?


Let's say I am missing my novel and I think that my friend might have taken it after I left the place.




1: By the time I left my apartment, my friend was still there, so he might have taken it.


2:When I left the apartment, my friend was still there, so he might have taken it.



I know what 2nd expression means, but not so sure about the first expression. Does it convey the same meaning as the 2nd expression?



Answer



In this context, the term when points out a specific point in time (when you left the apartment), while the phrase by the time… suggests a passage of time that took too long.


So your 2nd sentence (when I left…) better conveys what you are trying to say. The first sentence (by the time I left…) doesn't really convey the correct circumstances of your assertion.


For example:



The injuries didn't look too serious, but by the time the ambulance arrived, the victim was dead.

(Specifies that the delay was at issue)



versus:



The injuries didn't look too serious, but when the ambulance arrived, the victim was dead.
(Still technically correct, but missing the passage of time as an issue)



Sometimes that context is important, so the two phrases are not always interchangeable.



When you turn on the switch, the light goes on. — correct



By the time you turn on the switch, the light goes on. — doesn't make sense



grammar - If you were or if you are?


Which is correct?



If you were going to bet on that horse



or




If you are going to bet on that horse





2D Side Scrolling game and "walk over ground" collision detection


The question is not hard. I'm writing a game engine for 2D side-scrolling games, and I keep coming up against the problem of how to do collision with the ground (by "ground" I mean where the player walks, so something heavily used). I don't think I can handle ground collision per-pixel, and I can't do it with a simple shape comparison either, because the ground can be tilted. So what's the correct way? I know what tiles are and I've read about them, but how big should each tile be so that slopes don't look like stairs? Are there any other approaches?


I watched this game, and it's very nice how the character walks on the ground: http://www.youtube.com/watch?v=DmSAQwbbig8&feature=player_embedded


If there are "platforms" in mid-air, how should I handle them? I can walk over them, but I can't pass through them. Imagine a platform in mid-air: it allows you to walk over it, but it also limits you, because you can't jump in the area it occupies.


Sorry for my English; it's not my native language, and this topic has a lot of keywords I don't know, so I have to use workarounds. Thanks for any answer.


Additional information and suggestions: I'm taking a game course at the moment and I asked them how to do this. They suggested this approach (a quadtree): the whole map is divided into "big nodes"; each bigger node has sub-nodes, used to find where the player is; you can find the player's node with a ray at the player's position; and once you find the node the player is in, you can do collision checks over all its pixels (which can be 100-200px, nothing more).
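For reference, the node lookup described above can be sketched roughly like this, assuming an axis-aligned square map; the names (QNode, buildQuadTree, locate) and the fixed subdivision depth are illustrative (TypeScript):

```typescript
// One square region of the map; leaves are where per-pixel checks happen.
interface QNode {
  x: number;
  y: number;
  size: number;
  children: QNode[]; // empty for leaf nodes, otherwise the 4 quadrants
}

// Subdivide a square region `depth` times into 4 quadrants each step.
function buildQuadTree(x: number, y: number, size: number, depth: number): QNode {
  const node: QNode = { x, y, size, children: [] };
  if (depth === 0) return node;
  const half = size / 2;
  node.children = [
    buildQuadTree(x, y, half, depth - 1),
    buildQuadTree(x + half, y, half, depth - 1),
    buildQuadTree(x, y + half, half, depth - 1),
    buildQuadTree(x + half, y + half, half, depth - 1),
  ];
  return node;
}

// Descend to the leaf containing the point (e.g. the player's position).
function locate(node: QNode, px: number, py: number): QNode {
  if (node.children.length === 0) return node;
  for (const c of node.children) {
    if (px >= c.x && px < c.x + c.size && py >= c.y && py < c.y + c.size) {
      return locate(c, px, py);
    }
  }
  return node; // point on the outer boundary; treat current node as the match
}
```

Only the pixels inside the returned leaf (100-200px in your description) would then need a fine-grained collision check.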


Here is an example; however, I didn't show the bigger nodes very well because I'm not very good with Photoshop. :P


[sketch of the quadtree subdivision]


How is this approach?





Sunday, March 29, 2015

meaning - "Keep dreaming" vs. "keep on dreaming"


Which is correct? Is there a difference in meaning?




python - How to make an Infinite Vertical Scrolling Background in Pygame


Is there anyone who knows how to create an infinite vertical scrolling background in Pygame?


Your help would be much appreciated, thanks in advance!




graphics - Why is Y up in many Games?


I learned at school that the z-axis is up. It is the same in modeling software like Blender. However in many games the y-axis is up.



What is the reason?



Answer



I think the directions of the coordinate axes are holdovers from different domains where the crucial plane was different, and X/Y were aligned with that plane. In some applications the ground plane was the most important, so X/Y were the ground and Z ended up perpendicular to it. For games, however, the crucial plane is usually the screen (especially back when games were 2D and just starting to transition to 3D), so X/Y were the screen, and when games went 3D, Z ended up perpendicular to that.


You can see that kind of distinction between the two biggest 3D art tools: 3ds max and Maya. The Z axis is up in 3ds max because that grew out of architectural tools, while the Y axis is up in Maya because that grew out of movie-making tools.


The important thing to realize when comparing any specific tool to what you learned in school is that it's all arbitrary. It really doesn't matter which way the axes are pointed as long as you keep everything consistent and translate correctly between different coordinate systems.


verbs - Can you explain which word is connected to the word 'left'? What is the grammatical construct of the bold?


I am trying to understand the bold part of this quote from "Sky-high house prices in the most desirable cities are holding back growth and jobs":



As transport costs started to fall at the beginning of the 20th century, many of the manufacturing firms clustered in cities in developed countries left in search of cheaper land and labour.




  • Which word is connected to the word "left"?

  • Who left? The "developed countries", or "the manufacturing firms"?

  • The word "left" is after "countries". How can you know the "firms" left?

  • What tenses are involved? "firms clustered" and "developed countries that left" suggest past tense.




Answer




As transport costs started to fall at the beginning of the 20th century, many of the manufacturing firms clustered in cities in developed countries left in search of cheaper land and labour.



Many of the firms left. ("Many" = subject; "of" = preposition; "the" = definite article, which is a kind of determiner; "firms" = noun, the superset that the subject is part of; "of the firms" = prepositional phrase acting as an adjective; "left" = verb)


What kind of firms? Manufacturing firms. ("Manufacturing" is a gerund acting like an adjective.)


How were the "manufacturing firms" organized? They were clustered. ("Clustered" is a past-participle at the head of an adjectival phrase.)


Where were the "manufacturing firms clustered"? They were in cities. ("in" is a preposition; "in cities" is a prepositional phrase that acts as an adverb.)


Where were the "cities" the "manufacturing firms clustered in"? They were in developed countries. ("in" is a preposition; "developed" is a past-participle that describes "countries"; "countries" is a noun; "in developed countries" is a prepositional phrase that acts as an adjective.)



Why did "the firms" leave? They were "in search of" something. ("in" = preposition; "search" = noun form of a verb; "of" = preposition that completes the idiom; the whole phrase acts as an adverb.)


What were "the firms" seeking? They were "in search of" cheaper land and labour. ("land" = object; "cheaper" = comparative adjective describing both "land" and "labour"; "labour" = "object"; "and" = conjunction; "land and labour" = compound object; "of cheaper land and labour" = prepositional phrase that acts as an adverb.)


What is the "cheapness" of the "land and labour" being compared to? It is implied that the cost "of land and labour" (in the places the firms moved to) was cheaper than the cost "of land and labour" (in the places the firms moved from).


ellipsis - "Anyone have an extra apartment there?"



Anyone have an extra apartment there?



This quote is from an English native speaker. Why "anyone have"?


This could be an elliptical question, but I'd expect native speakers to ask a question using an affirmative sentence as in:



Anyone knows what happened?



Here is the full quote:




Friends. Hi! Sitting here on a Friday night brainstorming honeymoon options with David. Who has ideas?!? Where is a great place to visit at the end of March? We have ideas all over the place. One option we are thinking about Paris ... anyone have an extra apartment there?



If it is an elliptical question, which do you think is more common in everyday spoken English: elliptical questions or questions in affirmative form?


Edit:


Thanks to everyone for answering. I was looking for the full meaning and etymology of an idiom when I came across this quote which serves as a real world example:



In Reply to: (Correcting omission) posted by R. Berg on February 25, 2003


: : : Anyone know the origin of the idiom or phrase "Throw the book at em." I realize it means prosecute someone to the fullest extent of the law, a law enforcement term, but does anyone really know where it came from and when it first began being used?





Answer



This is conversational deletion, which John Lawler has addressed on ELU.


Briefly, this is a 'rule' of conversational English which says that a speaker can chop off elements at the beginning of an utterance which may be inferred from the context—primarily function words, "articles, dummies, auxiliaries, possessives, conditional if, and ... subject pronouns". In your example:



Does anyone have an extra apartment there?



Have stays in the infinitive, because the does is inferred. This might also be expressed



Has anyone got an extra apartment there?




If the subject is inferrable, that can go, too:



Have you got a spare pen?
Will you have a drink?



But as Prof. Lawler says,



this phenomenon only occurs in speaking English, and in other informal communication systems like email and txting that work like speech. It is not good formal written style, except for reporting dialog in a story.



audio - Recording equipment for game sounds



I was wondering if anybody knows about any good pieces of equipment fit for recording sound effects for games. If you can, please also tell me the price range for professional equipment.



Answer



Recording sound effects is an expensive process, and requires sound-proofing, expensive equipment and professional actors / real life objects.


Game studios and Film studios generally have huge sound banks from which they take basic sounds and mix, filter, compress and generally manipulate the basic sounds to their needs.


The Wilhelm Scream is a great example of re-use of sound effects in films: I never noticed it until I saw this video and now I hear it everywhere!



I believe that a lot of the work a Sound Designer has to do is find the right sounds from a sound bank and mess about with them until they're perfect for the game. This is why I think that with any basic recording equipment you could start recording and just use noise filters and compression to get some ok quality sounds. If you're starting off the best thing to know is how to use filters and apply sounds to a certain situation.


Software-wise I know SoundForge is widely used for sound editing, I really think it's a great, but expensive tool. For other software you can search the site, but Audacity is a good free sound editing tool.


You can then take a look at this question to get some free sounds: Where can I find free sounds for my game?


Equipment-wise you'd be best going to a local store and asking for professional advice; the price is dependent on your budget and on exactly what you want to do.


Saturday, March 28, 2015

dialect - What English is this?


The words "yer", "ter", "ernly", "der" and so on, are they Irish? Also the way the contractions are contracted, "don't" to "don'". Hagrid from Harry Potter speaks like that and actually I'm enjoying it:



  • "I don' want yer ter get hurt".



Is this just some accent or dialect, or simply really bad English? Where does it originate from?



Answer



JKR, the author, said in an interview that Hagrid's accent is from the same place she's from, West Country (England):



BPP2: Good question, good question. I've got another good question here ... what accent is Hagrid supposed to speak in?


JKR: West country ... where I come from, I come from the West country.



But "word of god" is the short, easy answer. It's much more interesting to look at the language.


According to this article about eye dialect, a big hint to what his dialect is comes from the rhoticity:




A clue to Hagrid’s regional background may come from the rhotocity implied by the post-vocalic ‘r’ in syllables where in the standard pronunciation variant the schwa should be present: ter, inter, tergether, etc. This rhotocity survived only in areas west of London, south of Birmingham and in Lancashire.



I think this corresponds to the red areas on this map.


That article also mentions several other features of Hagrid's speech:



The most consistent feature in Hagrid’s speech throughout the novels is the depiction of the velar nasal stop realized in the alveolar position (doin’, shakin’, murderin’). From the standpoint of phonetics, his speech is rich in such phenomena as the elision/dropping of final consonants due to colloquial speech register (an’, jus’, o’), h-dropping (musta bin) and the assimilation of sounds. Also found in Hagrid’s speech are a number of nonstandard spellings representing combinations of words (musta been, outta the ruins). These are so-called ‘junctional’ words.



H-dropping and dropping the "g" in "ing" are both mentioned as features of the dialect in the West Country English Wikipedia article. Another article says that the dialect is also signified by the grammar, including "double negatives, the use of meself instead myself, using personal plural pronouns (we, us) when referring to himself, and using the pronouns we, they, and you with the verb was".


I'm not entirely sure why he sometimes drops final consonants.


Finally, a lot of the nonstandard spellings (musta, outta, 'cept, etc.) aren't really tied to a specific dialect (as far as I know), but they just help to make Hagrid seem uneducated and lower class.





More sources:



java - In Slick2D, how can I generate a 2D platformer map from an image?


I'm fairly new to game development. I've been messing around with Slick2D. My map consists of rectangular objects. I want to use an image to represent my map, similar to this:


Level image


How could I generate a level from that image? As I said, I'm new to game development and am not familiar with terminology or procedures.


Bonus question: How could I also generate non-rectangular game objects from it?




graphics - How can I improve or replace my programmer art?



Say I'm a programmer who has made his own sprites or 3D models, which fall into the programmer-art category. What steps can I take to improve or replace my own art?



Answer



1) If you want to improve try putting up your work on specialized sites like http://www.polycount.com/forum/


2) If you don't have time (or insert reason here), go to recruiting sites. Here are a few to get you started:


http://forum.unity3d.com/viewforum.php?f=11


http://www.polycount.com/forum/forumdisplay.php?f=44


http://www.moddb.com/forum/board/recruiting-resumes


modal verbs - tense combined present with past



If I were her, I would have killed him months ago. [englishforum]



A person asks whether the expression is acceptable. In my mother tongue, that is also a natural way of putting it. Is it proper in English?



Answer



This is indeed exactly the right way to construct an IF...THEN expression with a counterfactual (a ‘condition contrary to fact’) in the IF clause:





  1. For your IF clause, you employ the ‘past subjunctive’ (or whatever your particular grammatical sect chooses to call it); this is the basic past form of your verb, without personal inflection — in this case, were.




  2. For your THEN clause, you start with the past form of a modal verb to mark the act as hypothetical: would ...




  3. ... together with the ‘bare infinitive’ of your lexical verb, which would be kill, except that ...





  4. ... in your case you want to say not that you would kill in the present or future, but that this hypothetical act of violence would by now be an already accomplished fact. Accordingly, you backshift the infinitive kill into the past by employing the bare perfect infinitive: have + the past participle of your lexical verb, killed.




c++ - How can I avoid jittery motion in SDL2?


I am experiencing stutter when I am moving faster than 0.1 units in my program.


jittery motion on 0.3 units


When doing exactly 0.1 units I get:


smooth motion on 0.1 units


For test purposes I've made the bot always head southeast.


int main()

{

// Initialize SDL2

SDL_Init(SDL_INIT_EVERYTHING);

// Defining world & window dimensions and camera position
const int SCREEN_WIDTH{ 800 };
const int SCREEN_HEIGHT{ 480 };
const int WORLD_WIDTH{ 1000 };

const int WORLD_HEIGHT{ 1000 };
int view_x{ 0 };
int view_y{ 0 };

// Create window and default rendering context
SdlCreateWindowAndRendererWrapped wr{ SCREEN_WIDTH, SCREEN_HEIGHT };
SDL_Renderer * const ren{ &wr.get_resource_renderer() };

Object ground(ren, "assets/ground.png", 0, 0, 95);
ground.set_size(600);

ground.set_pos(200, 200);

Object bot(ren, "assets/bot.png", 40, 46, 32);
bot.set_size(200);

// Game loop
bool is_running{ true };
SDL_Event event{};
SDL_SetRenderDrawColor(ren, 0, 0, 0, 0);
while (is_running) {

Uint64 start = SDL_GetPerformanceCounter();
/*--------------Event loop--------------*/
while (SDL_PollEvent(&event))
{
if (event.type == SDL_QUIT)
{
is_running = false;
}
} // end of the event loop


/*--------------Physics loop--------------*/
static Clock clock;
clock.tick();
Vec2f v{ 0.3f, 0.3f };
bot.move(v, clock.delta);

// Screen coordinate translations
bot.set_pos(bot.get_pos().x - view_x, bot.get_pos().y - view_y);
ground.set_pos(ground.get_pos().x - view_x, ground.get_pos().y - view_y);
ground.update();

bot.update();

// Check camera bounds
if (view_x < 0)
{
view_x = 0;
}
if (view_y < 0)
{
view_y = 0;

}
if (view_x > WORLD_WIDTH - SCREEN_WIDTH)
{
view_x = WORLD_WIDTH - SCREEN_WIDTH;
}
if (view_y > WORLD_HEIGHT - SCREEN_HEIGHT)
{
view_y = WORLD_HEIGHT - SCREEN_HEIGHT;
}


// Make the camera follow the bot
view_x = bot.get_pos().x - SCREEN_WIDTH / 2;
view_y = bot.get_pos().y - SCREEN_HEIGHT / 2;

/*--------------Rendering loop--------------*/
SDL_RenderClear(ren);
ground.draw(ren);
bot.draw(ren);
SDL_RenderPresent(ren);


/*--------------Todo: Animation loop--------------*/

// Cap to 60 FPS (approx. 16.666 ms per frame -- the cycle time)
Uint64 end = SDL_GetPerformanceCounter();
float elapsed_ms{ (end - start) / static_cast<float>(SDL_GetPerformanceFrequency()) * 1000.0f };
if (std::isless(elapsed_ms, 16.666f))
{
SDL_Delay(static_cast<Uint32>(floorf(16.666f - elapsed_ms)));
}
}


// Clean up used resources
SDL_Quit();
return 0;
}

I took inspiration for designing my game loop from


https://thenumbat.github.io/cpp-course/sdl2/08/08.html


The Clock class/struct was implemented exactly like Salajouni's:


How to calculate delta time with SDL?



The camera was implemented via this method:


https://wiki.allegro.cc/index.php?title=How_to_implement_a_camera


This is what my Object struct/class looks like:


class Object
{
public:
explicit Object(SDL_Renderer * t_renderer, const std::string & t_s, const int t_x, const int t_y, const int t_sz)
{
sprite = new Sprite;
sprite->set_texture(t_renderer, t_s);

sprite->set_src_rect(t_x, t_y, t_sz, t_sz);
sprite->set_dest_rect(0, 0, t_sz, t_sz);
size = t_sz;
}
~Object()
{
delete sprite;
sprite = nullptr;
}
int get_size() const

{
return size;
}
void set_size(const int t_sz)
{
size = t_sz;
}
const Vec2i & get_pos() const
{
return pos;

}
void set_pos(const int t_x, const int t_y)
{
pos.x = t_x;
pos.y = t_y;
}
void move(const Vec2f & t_v, const Uint32 t_delta)
{
pos.x += static_cast<int>(t_v.x * t_delta);
pos.y += static_cast<int>(t_v.y * t_delta);

}
// Todo:
void animate()
{
}
void update()
{
sprite->set_dest_rect(pos.x, pos.y, size, size);
}
void draw(SDL_Renderer * ren)

{
SDL_RenderCopy(ren, &sprite->get_texture(), &sprite->get_src_rect(), &sprite->get_dest_rect());
}
private:
Sprite * sprite{};
Vec2i pos{};
int size{};
};

The part that supposedly needs the most attention is the physics loop. This is the part where all the motion and motion updates happen. In there I define a velocity vector and set both of its components to 0.3. After that, the stutter/jitter happens. However, when I use 0.1, it runs smoothly, as shown in the pictures above. I created the window via SDL_CreateWindowAndRenderer(), so accelerated rendering should be active. I am not sure whether or not VSYNC gets activated as well when doing SDL_CreateWindowAndRenderer().



So what could possibly be the cause? Is it due to cascading rounding errors? Is it due to the active VSYNC and the manual framerate cap at the end of the loop? What is it exactly that is causing the stutter?


PS: And for the possibility that my Vector2 template class needs attention as well, there you go:


/* 2D math classes */
namespace oki2d::math2d
{
// Vector2 class definition
template <typename T>
struct Vector2
{
public:

T x{};
T y{};
explicit Vector2() : x{}, y{}
{ }
explicit Vector2(T t_value) : x{ t_value }, y{ t_value }
{ }
explicit Vector2(const T t_x, const T t_y) : x{ t_x }, y{ t_y }
{ }
explicit Vector2(const Vector2 & t_v) : x{ t_v.x }, y{ t_v.y }
{ }

Vector2 & operator=(const Vector2 & t_rhs)
{
if (&t_rhs == this)
{
return *this;
}
x = t_rhs.x;
y = t_rhs.y;
return *this;
}

Vector2 operator-() const
{
return Vector2{ -x, -y };
}
bool operator==(const Vector2 & t_rhs) const
{
// Perform single-precision floating-point comparison (float)
if (std::is_floating_point<T>::value)
{
return (std::fabsf(static_cast<float>((*this).x - t_rhs.x)) < std::numeric_limits<float>::epsilon())

&& (std::fabsf(static_cast<float>((*this).y - t_rhs.y)) < std::numeric_limits<float>::epsilon());
}

assert(std::is_floating_point<T>::value == false);

// Perform integer comparison otherwise
return x == t_rhs.x && y == t_rhs.y;
}
bool operator!=(const Vector2 & t_rhs) const
{

return !((*this) == t_rhs);
}
const Vector2 & operator+=(const Vector2 & t_rhs)
{
if (&t_rhs == this)
{
return *this;
}
x += t_rhs.x;
y += t_rhs.y;

return *this;
}
Vector2 & operator-=(const Vector2 & t_rhs)
{
if (&t_rhs == this)
{
return *this;
}
x -= t_rhs.x;
y -= t_rhs.y;

return *this;
}
Vector2 operator+(const Vector2 & t_rhs) const
{
return Vector2{ x + t_rhs.x, y + t_rhs.y };
}
Vector2 operator-(const Vector2 & t_rhs) const
{
return Vector2{ x - t_rhs.x, y - t_rhs.y };
}

Vector2 operator*(const T t_rhs) const
{
return Vector2{ x * t_rhs, y * t_rhs };
}
Vector2 operator/(const T t_rhs) const
{
return Vector2{ x / t_rhs, y / t_rhs };
}
static T double_length(const Vector2 & t_v)
{

return t_v.x * t_v.x + t_v.y * t_v.y;
}
static T length(const Vector2 & t_v)
{
return std::sqrt(t_v.x * t_v.x + t_v.y * t_v.y);
}
static Vector2 normalize(const Vector2 & t_v)
{
const T len{ length(t_v) };
return Vector2{ t_v.x / len, t_v.y / len };

}
static T dot_product(const Vector2 & t_lhs, const Vector2 & t_rhs)
{
return t_lhs.x * t_rhs.x + t_lhs.y * t_rhs.y;
}
friend std::ostream & operator<<(std::ostream & t_os, const Vector2 & t_v)
{
t_os << "(" << t_v.x << ", " << t_v.y << ")";
return t_os;
}

}; // Vector2

// Using declarations
using Vec2i = Vector2<int>;
using Vec2f = Vector2<float>;
} // oki2d::math2d

It is just a simple templated 2D vector math class. Nothing scary.



Answer



As Alexandre Vaillancourt in the comments suggested:




  • The stutter is caused by the conversion from float to int (rounding errors).

  • It might also be caused by the fact that I used SDL_Delay() earlier to cap the framerate, which is apparently unreliable since it doesn't wait the specified amount of time in an accurate fashion.


So the two issues that needed attention were:



  1. The conversions from float to int

  2. And possibly: using SDL_Delay() to achieve the framerate cap



Many tutorials on the web seem to achieve the framerate cap via a combination of SDL_Delay() and SDL_GetTicks().


I did it differently, since this is what helped me get a steady and fluid motion. So here goes:



  • What do you do instead? Well, you want 1 frame to be 16 ms long (that is, if you are aiming for a frequency of 60 Hz -- 60 FPS).

  • Thus, you need to wait until the 16 ms is up and only then execute your routines, be it rendering, physics or animation.

  • In other words: execute the routines only once the 16 ms have passed.


I used an if block to make execution dependent on the minimum cycle time of 16 ms.


For the time measurement I made a small utility class/struct inspired by Salajouni's:


struct Timer

{
Uint64 previous_ticks{};
float elapsed_seconds{};

void tick()
{
const Uint64 current_ticks{ SDL_GetPerformanceCounter() };
const Uint64 delta{ current_ticks - previous_ticks };
previous_ticks = current_ticks;
static const Uint64 TICKS_PER_SECOND{ SDL_GetPerformanceFrequency() };

elapsed_seconds = delta / static_cast<float>(TICKS_PER_SECOND);
}
};

You need to accumulate the elapsed seconds in each iteration (hence the variable name accumulator), then make a float comparison, either with an epsilon comparison (float_val1 - float_val2 < epsilon, with e.g. epsilon = 0.00001f) or simply with std::isgreater() or std::isless().


Keep accumulating the seconds until you reach the cycle time you need, e.g. 16 ms. (And again: do not forget to reset the accumulator inside the if block; it needs to count up to the 16 ms again every time. I reset it with -CYCLE_TIME.)
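To see the accumulator pattern in isolation, here is a minimal, SDL-free sketch (the FrameCap name and the synthetic frame times are invented for illustration; it subtracts one cycle on reset rather than resetting to -CYCLE_TIME, which is a slightly different variant of the same idea):

```cpp
#include <cassert>

// Minimal sketch of an accumulator-based frame cap.
// Frame durations are accumulated; a "step" only runs once a full
// cycle (1/60 s) has been collected, and the accumulator is then
// reduced by one cycle.
constexpr float CYCLE_TIME = 1.0f / 60.0f; // ~16.6 ms

struct FrameCap {
    float accumulated_seconds = 0.0f;
    int steps_run = 0;

    void on_frame(float elapsed_seconds) {
        accumulated_seconds += elapsed_seconds;
        if (accumulated_seconds > CYCLE_TIME) {
            accumulated_seconds -= CYCLE_TIME; // carry the remainder over
            ++steps_run; // physics/rendering would go here
        }
    }
};
```

Feeding it 100 frames of 8 ms each (0.8 s of simulated time) runs close to 48 steps, i.e. near the 60 Hz target, regardless of how the frame times line up.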


int main()
{
/*--------------Game loop--------------*/
// Timing constants

const int UPDATE_FREQUENCY{ 60 };
const float CYCLE_TIME{ 1.0f / UPDATE_FREQUENCY };
// System timing
static Timer system_timer;
float accumulated_seconds{ 0.0f };
while (is_running)
{
// Update clock
system_timer.tick();
accumulated_seconds += system_timer.elapsed_seconds;


/*--------------Event loop--------------*/
/* ... */

// Cap the framerate
if (std::isgreater(accumulated_seconds, CYCLE_TIME))
{
// Reset the accumulator
accumulated_seconds = -CYCLE_TIME;


/*--------------Physics loop--------------*/
static Timer physics_timer;
physics_timer.tick();

bot.position.x += bot.direction.x * bot.speed * physics_timer.elapsed_seconds;
bot.position.y += bot.direction.y * bot.speed * physics_timer.elapsed_seconds;

// Screen coordinate translations
/* ... */


ground.update();
bot.update();

// Camera
/* ... */

/*--------------Rendering loop--------------*/
SDL_RenderClear(ren);
ground.draw(ren);
bot.draw(ren);

SDL_RenderPresent(ren);

/*--------------Todo: Animation loop--------------*/
static Timer animation_timer;
animation_timer.tick();

/* ... */
}
}


// Clean up used resources
/* ... */
SDL_Quit();
return 0;
}

Note that I left out some code parts to make it easier to follow.


Hope it helps! :)




TL;DR:




  • Accumulate floats instead of using SDL_Delay() and do not attempt to convert floats to integers carelessly.

  • Avoid conversions by constructing a timer class that is based on float.

  • Then do a float comparison via an epsilon comparison or via std::isgreater()/std::isless().


mathematics - The correct way to transform a ray with a matrix?


Playing with XNA Triangle Picking Sample I found out that it does not work well if you scale the world matrix of the objects you want to pick. When I dug into the implementation I found this comment in the RayIntersectsModel method:


        // The input ray is in world space, but our model data is stored in object
// space. We would normally have to transform all the model data by the
// modelTransform matrix, moving it into world space before we test it
// against the ray. That transform can be slow if there are a lot of
// triangles in the model, however, so instead we do the opposite.

// Transforming our ray by the inverse modelTransform moves it into object
// space, where we can test it directly against our model data. Since there
// is only one ray but typically many triangles, doing things this way
// around can be much faster.

After the comment they actually transformed the ray:


ray.Position = Vector3.Transform(ray.Position, inverseTransform);
ray.Direction = Vector3.TransformNormal(ray.Direction, inverseTransform);

With this implementation, picking suffered from "short-sightedness" if you scaled the models: it could only pick objects that were close enough to it. Even the ray-boundingSphere intersection test, whose implementation is hardcoded into XNA, failed in the same way.



I fixed this by "doing the wrong thing" - I actually started transforming every vertex by the model's world matrix and to fix the boundingSphere test I added this code:


Quaternion rot;
Vector3 scale, trans;
modelTransform.Decompose(out scale, out rot, out trans);

float maxScale = Math.Max(Math.Max(scale.X, scale.Y), scale.Z);

boundingSphere.Center = Vector3.Transform(boundingSphere.Center, modelTransform);
boundingSphere.Radius *= maxScale;


This obviously is not optimal and I wanted to know if there is a way to actually transform the ray back to the model's space with the model's inverted matrix, while making it work for scaled matrices?


SOLUTION: Thanks to Nathan's answer I found a way to fix the ray scaling - just renormalize the ray direction:


ray.Position = Vector3.Transform(ray.Position, inverseTransform);
ray.Direction = Vector3.TransformNormal(ray.Direction, inverseTransform);
//ADD THE FOLLOWING LINE:
ray.Direction.Normalize();

SOLUTION UPDATE: As I tested the app, I found that Nathan was indeed completely right and another change was necessary. Here is the full code for the correct RayIntersectsModel() method:


static float? RayIntersectsModel(Ray ray, Model model, Matrix modelTransform,
out bool insideBoundingSphere,

out Vector3 vertex1, out Vector3 vertex2,
out Vector3 vertex3)
{
vertex1 = vertex2 = vertex3 = Vector3.Zero;
...
Matrix inverseTransform = Matrix.Invert(modelTransform);
// STORE WORLDSPACE RAY.
Ray oldRay = ray;

ray.Position = Vector3.Transform(ray.Position, inverseTransform);

ray.Direction = Vector3.TransformNormal(ray.Direction, inverseTransform);
ray.Direction.Normalize();

// Look up our custom collision data from the Tag property of the model.
Dictionary<string, object> tagData = (Dictionary<string, object>)model.Tag;

if (tagData == null)
{
throw new InvalidOperationException(
"Model.Tag is not set correctly. Make sure your model " +

"was built using the custom TrianglePickingProcessor.");
}

// Start off with a fast bounding sphere test.
BoundingSphere boundingSphere = (BoundingSphere)tagData["BoundingSphere"];

if (boundingSphere.Intersects(ray) == null)
{
// If the ray does not intersect the bounding sphere, we cannot
// possibly have picked this model, so there is no need to even

// bother looking at the individual triangle data.
insideBoundingSphere = false;

return null;
}
else
{
// The bounding sphere test passed, so we need to do a full
// triangle picking test.
insideBoundingSphere = true;


// Keep track of the closest triangle we found so far,
// so we can always return the closest one.
float? closestIntersection = null;

// Loop over the vertex data, 3 at a time (3 vertices = 1 triangle).
Vector3[] vertices = (Vector3[])tagData["Vertices"];

for (int i = 0; i < vertices.Length; i += 3)
{

// Perform a ray to triangle intersection test.
float? intersection;

RayIntersectsTriangle(ref ray,
ref vertices[i],
ref vertices[i + 1],
ref vertices[i + 2],
out intersection);

// Does the ray intersect this triangle?

if (intersection != null)
{
// RECOMPUTE DISTANCE IN WORLD SPACE:
Vector3 vertexA = Vector3.Transform(vertices[i], modelTransform);
Vector3 vertexB = Vector3.Transform(vertices[i+1], modelTransform);
Vector3 vertexC = Vector3.Transform(vertices[i+2], modelTransform);

RayIntersectsTriangle(ref oldRay,
ref vertexA,
ref vertexB,

ref vertexC,
out intersection);

// If so, is it closer than any other previous triangle?
if ((closestIntersection == null) ||
(intersection < closestIntersection))
{
// Store the distance to this triangle.
closestIntersection = intersection;


// Store the three vertex positions in world space.
vertex1 = vertexA;
vertex2 = vertexB;
vertex3 = vertexC;
}
}
}

return closestIntersection;
}

}

Answer



Transforming the ray position and direction by the inverse model transformation is correct. However, many ray-intersection routines assume that the ray direction is a unit vector. If the model transformation involves scaling, the ray direction won't be a unit vector afterward, and should likely be renormalized.


However, the distance along the ray returned by the intersection routines will then be measured in model space, and won't represent the distance in world space. If it's a uniform scale, you can simply multiply the returned distance by the scale factor to convert it back to world-space distance. For non-uniform scaling it's trickier; probably the best way is to transform the intersection point back to world space and then re-measure the distance from the ray origin there.
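As a rough illustration of the uniform-scale case (plain C++ rather than XNA; the Vec3 helpers and function names are invented for this sketch), the distance reported along a renormalized ray in model space is the world distance divided by the scale factor, so multiplying it back by the scale recovers the world-space distance:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Distance from a ray origin to a hit point, measured along a unit
// direction. In model space (world coordinates shrunk by 1/s for a
// uniform model scale s), the same hit is reported at worldDistance / s.
static float hitDistance(Vec3 origin, Vec3 point) {
    return length(sub(point, origin));
}
```

So for a uniform scale of 2, a hit 10 units away in world space shows up as 5 units in model space, and multiplying by the scale factor converts it back, which is exactly the correction the answer describes.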


Looking for a good actionscript 3 book



I've been looking for a book on actionscript3 development, but while there's tons of books out there, nobody seems to want to recommend any specific one.


One book I've been pointed towards is the cookbook by O'Reilly, but it, like most books out there, seems to assume that I'm using Flex Builder or Flash. Instead, I'm "just" using FlashDevelop, or the free SDK directly.


I've also been told to just go with the API reference and live with it. I could do that, I suppose, but I'd rather have a book that gives me the big picture. Kind of like with Cocoa, there's Hillegass's book, or the red book for OpenGL.


So, what would be the ActionScript 3 book to get?



Answer



ActionScript 3.0 Animation


Foundation ActionScript 3.0 Animation: Making Things Move! is an excellent book covering ActionScript 3.0 from a programmer's perspective focusing on game or animation programming. Lots of practical examples in topics like velocity, acceleration, friction, easing, collision detection, rotation, basic physics, particles, and forward/inverse kinematics. I'd highly recommend it.


ActionScript 3.0 Animation on Amazon



syntax - What is a modal verb, really?


Is "have to" a modal verb? tells us, in two conflicting answers, that it's either a modal verb or it's not a modal verb.


Then I just realized that there are never any definitions provided for what a modal is, only examples of modal verbs, or examples of what they do. It's an all-too-familiar thing: "modals are used to express obligation, necessity, permission . . . ".


To narrow down and clarify the question,



  • What properties should an item possess to qualify as a modal?


  • Are modals and modal verbs different things? In other words, are there modals that are not verbs?

  • Are there modal verbs that aren't auxiliary verbs?


I realize there might be conflicting definitions, but there are areas most experts would agree upon, and for the shadier parts of the definition, only one definition would suffice.




Friday, March 27, 2015

grammar - What verb form should come after "ALREADY", "Present" or "Past Participle"?


What is the correct verb form to use with the word "already"? Should I use the present form of the verb or past participle form of the verb? Example:



  1. Already share your video in my Facebook.


  2. Already shared your video in my Facebook.


Which one is correct?



Answer



You can say "Already shared your video in my Facebook profile" to mean "I have already shared your video on my Facebook profile" or even "Already shared it." if the meaning would be clear in context. This kind of terse I'm-busy-gotta-run writing often employs ellipsis.


xna - How do I check collision when firing bullet?


I'm currently creating a 2D game from a top-down perspective. I'm having problems with bullets. Yes, I currently simulate their movement so the user can see them (about 2x ). I am moving them with


// this is static

Direction = new Vector2(mouse.X, mouse.Y) - new Vector2(player.x, player.y);
Direction.Normalize();
Speed = 900f;

//this is called in Draw(GameTime gameTime)
Position += Direction * 9f * Speed * (float)gameTime.ElapsedGameTime.TotalSeconds;

Actually it's working perfectly; however, the bullets are too fast, so I can't check their collision each frame, and I would need to replay their path each frame. How would I do that? And how would I make this work for both the server (which doesn't use XNA, as I want to port it to Linux later) and the client (using XNA)?


Here's an image which shows the problem (1. the bullet is before the target; 2. the bullet is behind the target, so it can no longer intersect it. My goal is to calculate whether it intersected the target in between).




I almost forgot to mention: these objects are moving players, which means a player can move out of the bullet's path.



Answer



The simplest way to solve this problem is to use fixed time steps in XNA and request more updates per second for your update method. This makes your update code run more often with smaller time steps, which in turn means your bullet travels a shorter distance between collision checks. This is essentially what Nick mentioned above, but with fewer changes to your code.


In the constructor for your Game class put the following two lines of code:


base.IsFixedTimeStep = true;
base.TargetElapsedTime = new TimeSpan((long)(TimeSpan.TicksPerSecond / 600f));

600f is the number of updates per second you want. Most likely you're already running with a fixed time step (the default in XNA) at 60 fps; by changing it to 600, your bullets will be checked 10 times more often.


Once you understand how this works in practice, however, I would recommend doing it manually: divide the ElapsedGameTime you receive in your update according to the speed of your bullet, and update the bullet multiple times per frame. This way you don't force your entire game to run at a higher framerate just to accommodate fast-moving bullets. Hope this helps.
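The manual sub-stepping described above can be sketched like this (an illustrative C++ version rather than the XNA/C# original; the Bullet fields, step count and hit callback are assumptions for the example). The frame's elapsed time is split into N smaller slices, and the bullet is moved and collision-checked once per slice:

```cpp
#include <cassert>
#include <cmath>

struct Bullet { float x, y, vx, vy; };

// Move the bullet across one frame in `substeps` slices, checking a
// collision callback after each slice so fast bullets cannot tunnel
// through thin targets between two frames.
template <typename HitTest>
bool moveWithSubsteps(Bullet& b, float elapsedSeconds, int substeps, HitTest hit) {
    const float dt = elapsedSeconds / substeps;
    for (int i = 0; i < substeps; ++i) {
        b.x += b.vx * dt;
        b.y += b.vy * dt;
        if (hit(b.x, b.y)) return true; // collision found mid-frame
    }
    return false;
}
```

With a bullet travelling 16 units per frame past a 2-unit-wide target, a single step jumps straight over the target, while 8 sub-steps catch the hit.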





PS. Swept objects are an interesting solution to this problem. While probably overkill for your scenario, the idea is to sweep the object (the bullet) along its movement vector and then test this new shape against the other objects for collision. Sweeping a circle gives you a capsule, for instance. You would normally do this with the bounding boxes of the sprites since they're easier to calculate, but using algorithms like GJK you can do it with arbitrary shapes as well (in both 2D and 3D).
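For the swept-circle case mentioned in the postscript, the capsule-versus-circle test reduces to a point-to-segment distance check (a minimal sketch with invented names, assuming a circular bullet swept over one frame and a circular target):

```cpp
#include <cassert>
#include <algorithm>

struct Vec2 { float x, y; };

// Squared distance from point p to segment a-b.
static float distSqToSegment(Vec2 a, Vec2 b, Vec2 p) {
    const float abx = b.x - a.x, aby = b.y - a.y;
    const float lenSq = abx * abx + aby * aby;
    float t = lenSq > 0.0f
        ? ((p.x - a.x) * abx + (p.y - a.y) * aby) / lenSq
        : 0.0f;
    t = std::max(0.0f, std::min(1.0f, t)); // clamp to the segment
    const float cx = a.x + t * abx - p.x;
    const float cy = a.y + t * aby - p.y;
    return cx * cx + cy * cy;
}

// A circle of radius r swept from a to b (a capsule) hits a circular
// target of radius tr centered at c if the segment comes within r + tr.
static bool sweptCircleHitsCircle(Vec2 a, Vec2 b, float r, Vec2 c, float tr) {
    const float reach = r + tr;
    return distSqToSegment(a, b, c) <= reach * reach;
}
```

This catches hits anywhere along the frame's travel, including targets the bullet would otherwise skip straight over between two position samples.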


Thursday, March 26, 2015

Unity C# - Change color, or material of specific line segments on Line Renderer




We're making a game that plays with a LineRenderer, and we want to give it different visual feedback depending on the side of the screen it's on.


We want to change the feedback of the specific line segments that are on the other side (when they pass the middle point).


The API lets you map the segment positions, but isn't there a way to change the material/color of a specific segment? As far as I can see, you can only do it for the whole Line Renderer, not segment by segment.




c++ - Simple scripting language for "one-liner"-type scripts?



Can you recommend a scripting language which allows me to easily parse "one-liner" types of scripts (they're just commands, really)?


For example, a C/C++ function which simply sets the value of a 2-dimensional vector (position, for example):


void SetVector(Vector2 &vector, float x, float y)
{
    vector.x = x;
    vector.y = y;
}

And in the scripting language, the entire script should be able to just be one line long. For example, this would be a script to set the components of vector "a" to 123.0f and 456.0f (x and y) (syntax of the language doesn't really matter, just as an example):


set_vector a 123,456 

I'd use these short scripts to do simple things like changing the position of objects at runtime (for debugging or other purposes), or to create simple config files for all kinds of entities, which would go like:


bomb.script:


set_damage 1000
set_range 250

set_texture Data/Sprites/bomb.png

etc.
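For scale, one-liners like these only need a tokenizer and a command table; a minimal hand-rolled dispatcher might look like the following C++ sketch (the `set_damage` name comes from the example above; everything else is an assumption):

```cpp
#include <functional>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// A handler receives the whitespace-separated arguments of one line.
using Handler = std::function<void(const std::vector<std::string>&)>;

// Split a one-line script into a command name and its arguments,
// then dispatch through a table of registered handlers.
bool RunLine(const std::map<std::string, Handler>& commands,
             const std::string& line)
{
    std::istringstream in(line);
    std::string name;
    if (!(in >> name)) return false;          // blank line

    std::vector<std::string> args;
    for (std::string a; in >> a;) args.push_back(a);

    const auto it = commands.find(name);
    if (it == commands.end()) return false;   // unknown command
    it->second(args);
    return true;
}
```

Running a whole config file is then just calling `RunLine` once per line. The trade-off, as the answer below notes, is that this design has no room to grow into expressions or logic.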


From a superficial glance, Lua, AngelScript etc. seem to be a little bit bloated for my simple needs (Although I must admit I haven't put tons of time into those two). Can you recommend something better?



Answer



As far as simple, "one-liner" scripts are concerned, Lua is a perfectly legitimate choice. Function binding is easy, even with the native API (though there are plenty of helpers for this). Its syntax is pretty easy to learn. Oh, and the runtime is tiny, if that sort of thing matters to you. You won't even have to include its standard libraries, so it'll be even smaller than the compiled static library.


Lua also makes a good data-description language, much like JSON or XML.


Also, don't sell yourself short in terms of room to grow. Right now, you may only want "configuration scripts." But you'd be surprised how easily logic starts creeping into those configurations. Maybe you spawn certain entities based on game state. Or change the texture of something based on game state. Whatever.


Lua can handle all of these kinds of things quite readily.


It is much easier to have too much power and not use it, than it is to have less power and then suddenly need more. Lua's power will be there if you use it, and if you don't, then you won't care. It'll still be quick and simple.



card game - CCG Design - How to define text descriptions


I'm designing a CCG/TCG game (with the Kotlin language), and I am looking for advice regarding the text of the cards.



First of all, here is an example of one card, defined in XML:


      type="event"
      image="tex-event"
      metalCost="1"
      energyCost="1">

Whenever you destroy a creature, you will gain 3 energy.

Your stronghold does not produce energy.
ScavengeResOnDestroy and ResFlux are direct mappings to class names. ScavengeResOnDestroy refers to the first line in the text, and ResFlux to the second.


Before I go on and design a whole lot of cards this way (then again, maybe this issue would become a lot clearer after I design some more cards): should I attempt to tie/hard-code the text descriptions into the Trait classes?


The thing is, I find it quite a hard problem to "generate" the text for a Trait. For example, ScavengeResOnDestroy could have several targets:




  • Self (Whenever you destroy a creature...)

  • Opponent (Whenever an opponent destroys a creature...)

  • Both (Whenever a player destroys a creature...)


And to complicate things even further, I could very well see a possibility to modify ScavengeResOnDestroy, like so:



  • Whenever you destroy a creature you do not control..

  • Whenever an opponent destroys any creature...



Now I haven't done much research on the topic yet... but I guess I'd need to build some kind of "sentence generator" (I don't know the correct term)?
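One very small way to sketch such a generator is to assemble each sentence from a subject clause chosen by the trait's target plus the trait-specific remainder. The enum values below mirror the bullet list above and the class name comes from the card example; everything else is hypothetical:

```cpp
#include <string>

enum class Target { Self, Opponent, Both };

// Pick the subject clause from the trait's target, then append the
// trait-specific effect text - a tiny template-style sentence generator.
std::string ScavengeResOnDestroyText(Target target, int energy)
{
    std::string subject;
    switch (target) {
        case Target::Self:     subject = "Whenever you destroy a creature"; break;
        case Target::Opponent: subject = "Whenever an opponent destroys a creature"; break;
        case Target::Both:     subject = "Whenever a player destroys a creature"; break;
    }
    return subject + ", you will gain " + std::to_string(energy) + " energy.";
}
```

Each extra modifier ("...you do not control") multiplies the clause variants, which is exactly why generated text tends to only pay off for the simplest traits.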


Do you think I should stick with simply defining the text for each card, as in the example, or possibly, under the ? And perhaps generate the text only for the simplest traits? Something else?




punctuation - Shouldn't the comma be omitted? And is this sentence consistent in the tense?


Original sentence:




Prince Andrew, looking again at that genealogical tree, shook his head, laughing as a man laughs who looks at a portrait so characteristic of the original as to be amusing.



(War and Peace, Tolstoy, English translation)


In the sentence above, shouldn't the comma before laughing be omitted if this is about the participle clause denoting "an action that happens at the same time in the past"? I mean it should be



Prince Andrew…shook his head laughing as a man…



Plus, shouldn't the verb laugh after a man be in the past tense in order to be in accordance (agreement) with shook his head?


You know, the whole sentence should be consistent….




xna - Why does GameComponent have an Initialize() method?


I'm not sure if this exact question has been asked before, but all I could find were questions about how to work with GameComponent, or why something wasn't working.



I understand that you override the Initialize, Draw, and Update methods with your own code, and XNA will then invoke them when the time is right. But what exactly is the purpose of Initialize? If you're writing something that extends from GameComponent, why wouldn't you just put all your initialization logic in the constructor?


For example, if my classes have a reference type that needs to be initialized, such as a collection or dictionary, I make a point of doing so in the constructor, and this pattern follows me into my GameComponent class design.




EDIT: Having worked with it a bit more, it looks like the primary purpose is to allow for more flexibility.


For example, IGameComponent merely exposes one method -- Initialize() -- and by implementing it, you can plug your component into the Game.Components collection. This means that GameComponent, with all of its various properties, even the reference to Game, is just an implementation detail.


Another use I've found is when one component contains a collection of other components. In this case, the constructor for the container is called, and you now have empty collections. This pattern lets you populate the collections first, then call Initialize on both the parent and child objects.
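In sketch form, that is two-phase initialization. The shape below is plain C++ rather than XNA, and the class names are assumptions; only the phase split matches the pattern described above:

```cpp
#include <memory>
#include <vector>

// Phase 1: constructors only allocate. Phase 2: Initialize() runs once
// the whole tree is assembled, so children added after construction are
// still reached.
struct Component {
    bool initialized = false;
    virtual ~Component() = default;
    virtual void Initialize() { initialized = true; }
};

struct Container : Component {
    std::vector<std::unique_ptr<Component>> children;

    void Initialize() override {
        Component::Initialize();
        for (auto& c : children)   // cascade to children populated later
            c->Initialize();
    }
};
```

Anything placed in `children` between construction and `Initialize()` still gets its second phase, which is exactly what a constructor alone cannot offer.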


I'm still stumped as to what I would specifically put into Initialize, but I understand the reasoning behind it.



Answer



The reason you wouldn't put the initialize logic in the constructor is because Initialize is the first point where you can be sure that GraphicsDevice is set up.


Recall that, while your constructor for your Game-derived class may create GraphicsDeviceManager, the graphics device itself is only created when Game.Run() is called (an instance method, so it requires an instance, which requires calling the constructor).





A more interesting question would be: Why is there Initialize and LoadContent? Why not just LoadContent?


Partly this goes back to XNA 1.0 (there's a certain degree of backwards-compatibility in the API). Back then, there was no LoadContent method, but instead there was a LoadGraphicsContent method. That method would get called whenever the graphics device was reset, in order to reload all of your GPU resources - such as textures (you were expected to handle this correctly).


(Aside: A device reset can happen if the user minimises the window, locks their workstation, etc.)


From XNA 2.0 onwards, XNA will automatically reload textures (and other GPU resources) for you - and the method was renamed LoadContent. However, in the very rare case where the automatic reload fails, XNA will still call LoadContent to attempt to recreate the resources from scratch.


(Most people don't bother writing LoadContent carefully enough to handle this situation correctly.)


In either case, having an Initialize method gives you a separate code-path that is only called once at startup, when the graphics device is available, but is not part of the reset-handling.


The methods in (Drawable)GameComponent are the way they are largely to mirror what is in Game, but the same reasoning as above applies.




You might also find this answer interesting.



Wednesday, March 25, 2015

meaning in context - What does "the seeds of change" mean?


Planting the seeds of change

The Green Wave project aims to help meet the goals of the United Nations Convention on Biological Diversity by educating young people on the importance of biodiversity. In an ambitious program, students from schools all across the world have been invited to mark the International Day for Biodiversity each year by planting a single tree of an indigenous or locally important species.


What does the headline "Planting the seeds of change" mean?




mathematics - How to think about 2D scaling/rotation transformations


This is kind of an embarrassing question for me, since I'm getting more in-depth with XNA, but sometimes the way I think about things in my head contradicts an example and I need to re-think it to make it work.


So let's say I am just working with a sprite that is 10 wide and 10 high.


Scaling:


When I apply a scaling matrix, am I just shifting the points back, or am I scaling the X-axis and Y-axis in the LOCAL coordinate space of that sprite? I ask because on this page:


http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2D/Coll_Detection_Matrices.php


and


http://www.riemers.net/eng/ExtraReading/matrices_geometrical.php


It shows the X and Y of the local axes the sprite is in being scaled down. So in my head, scaling means that you scale down the local axis space where the sprite is located, which makes it smaller.



Rotation:


When I apply a rotation matrix, am I rotating the local x and y axes of the sprite around the world origin, or should I think of it as rotating the local coordinate space entirely around the world origin?


Translation:


This to me is just moving that scaled image with those smaller x and y axis to some other point in the world.



Answer



For me, the easiest way to think about this is to remember that each coordinate space can be expressed in world coordinates by a vector from the world origin to the local origin, which I just call the position vector, and two (in 2D; three in 3D) basis vectors. Any translation moves only the head of the position vector (the tail is always anchored at the world origin). Rotation and scaling can affect all three (or four) vectors, depending on the specific rotation or scale as well as the order in which you have already applied transformations.


This, along with my knowledge that these transforms don't change the object in local space (a point at (1,0,0) is always at (1,0,0) in local space, no matter what the basis vectors are in world space), only its appearance in world space, allows me to visualize these transforms with relative ease.


In essence you are thinking the right thing, but you must be aware that these transforms don't actually affect the local coordinate space of an object, only its mapping into world space.


For example:


I have an object defined by {(0,0), (1,0), (1,1), (0,1)} in local space. The position vector is (0,0), and the basis vectors are (1,0) and (0,1). In other words, my transform matrix is just an identity matrix, and the object looks like a square with its bottom-left corner at the origin both in the world view and in a local view.



If I were to apply a translation, say by the vector (2,3), only the position vector changes. My object is still defined as {(0,0), (1,0), (1,1), (0,1)} in local space, but the object in world space has been moved up and over. The object still looks like a square in both views however.


If I were to then apply a scale, say by (1/2, 1/3), the position vector would be changed to (1,1) and the basis vectors would be changed to (1/2, 0) and (0,1/3). Note that my object is STILL defined as {(0,0), (1,0), (1,1), (0,1)} in local space, but if I view it in world space it is a rectangle that has been shifted up and to the right, not a square with bottom left corner at the origin as it is defined in local space.


Rotations are very similar, but the changes in the basis vectors are not as easy to calculate in my head, so I shall do a very simple example with a 45 degree rotation counter-clockwise around the origin. The position vector would be changed to (0, sqrt(2)), while the basis vectors would be changed to ~(0.35, 0.35) and ~(-0.24, 0.24). The object would now be a slanted diamond, moved up in world space.
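These numbers can be checked mechanically by pushing the position and basis vectors through the same scale and rotation; a minimal sketch (helper names are my own, counter-clockwise rotation assumed):

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Scale a vector componentwise.
Vec2 Scale(Vec2 v, double sx, double sy) { return {v.x * sx, v.y * sy}; }

// Rotate a vector counter-clockwise by the given angle in radians.
Vec2 Rotate(Vec2 v, double radians)
{
    const double c = std::cos(radians), s = std::sin(radians);
    return {v.x * c - v.y * s, v.x * s + v.y * c};
}
```

Scaling the translated position (2,3) by (1/2, 1/3) gives (1,1); rotating (1,1) by 45 degrees gives (0, sqrt(2)); and the scaled basis vectors (1/2, 0) and (0, 1/3) rotate to roughly (0.35, 0.35) and (-0.24, 0.24), matching the worked example above.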

