Tuesday, July 31, 2018

camera - Creating a seamlessly looping 2D multiplayer level?


A discussion came up recently about how to create a 2D side-scrolling multiplayer game with a looping level design (think of Starbound and how its worlds wrap around).


I thought the simplest way would be to have a rectangular map with trigger zones that teleport players from one side to the other. The obvious issue with this approach is the case of having multiple players at the edge of the map at once. You don't want to just teleport players in front of each other, and you would need a way to transport a player without making them appear to vanish in front of the other players.


To amend this idea and fix the issue I came up with the following: have a trigger zone (red square in the image) from which players can see a "clone zone" (green square). In this green square, objects from the opposite side of the map are copied into the corresponding clone zone (shown with the A & B shapes). When a player reaches the starting edge of the clone zone they are teleported to the other side of the map.


[image: diagram of the trigger zones (red), clone zones (green), and the copied A & B shapes]


In this example Player 2 would think they are seeing Player 1, but they would actually be seeing Player 1's clone, and vice versa.


This seemed a bit extreme and complex for the problem at hand. My question is whether this solution is a good approach to tackling the issue, or whether there is a simpler way to solve the problem.



Answer



This system with all these triggers sounds a bit too complicated and error prone.


You could wrap the position of the player using modulo with something like playerPositionX = playerPositionX % mapWidth



This way, when your player reaches playerPositionX == mapWidth, the position will wrap back to 0.


This solution can be extended to the whole rendering system as well, by applying the same wrapping wherever you compute where things are drawn.
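A minimal C# sketch of that wrapping, assuming a hypothetical mapWidth value and handling negative positions as well (a plain % can return negative values in C#):

// Wraps a horizontal position into the range [0, mapWidth).
// Handles negative values, which a plain % would not.
static float WrapX(float x, float mapWidth)
{
    float wrapped = x % mapWidth;
    return wrapped < 0 ? wrapped + mapWidth : wrapped;
}

// Example: call once per tick after movement has been applied.
// playerPositionX = WrapX(playerPositionX + velocityX * dt, mapWidth);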


grammar - Can the verb "wrap" describe the current condition of something?



Suppose there are two situations:




  1. A line of people waiting outside some store is getting very long and the line goes around a street corner.




  2. A ribbon is glued around a water pipe.




Suppose the conditions in both situations are current conditions. I am exploring the possible usage of the present tense of the verb "wrap" for those two situations:




The line of people wraps around the street corner.
The ribbon wraps around the pipe.



Is the usage of the present-tense "wrap" to describe those two current conditions correct?




Monday, July 30, 2018

architecture - Entity System and rendering


Okay, here's what I know so far: the entity contains components (data storage) which hold information like texture/sprite, shader, etc.


And then I have a renderer system which draws all of this. But what I don't understand is how the renderer should be designed. Should I have one component for each "visual type"? One component without a shader, one with a shader, and so on?


I just need some input on what the "correct way" to do this is, plus tips and pitfalls to watch out for.



Answer




This is a difficult question to answer because everyone has their own idea about how an entity component system should be structured. The best I can do is share with you some of the things I have found to be most useful for me.


Entity


I take the fat-class approach to ECS, probably because I find extreme methods of programming to be highly inefficient (in terms of human productivity). To that end, an entity to me is an abstract class to be inherited by more specialized classes. The entity has a number of virtual properties and a simple flag that tells me whether or not this entity should exist. So relative to your question about a render system, this is what the Entity looks like:


public abstract class Entity {
    public bool IsAlive = true;
    public virtual SpatialComponent Spatial { get; set; }
    public virtual ImageComponent Image { get; set; }
    public virtual AnimationComponent Animation { get; set; }
    public virtual InputComponent Input { get; set; }
}


Components


Components are "stupid" in that they don't do or know anything. They have no references to other components, and they typically have no functions (I work in C#, so I use properties to handle getters/setters - if they do have functions, they are based around retrieving data that they hold).


Systems


Systems are less "stupid", but are still dumb automatons. They have no context of the overall system, have no references to other systems and hold no data except for a few buffers that they may need to do their individual processing. Depending on the system, it may have a specialized Update, or Draw method, or in some cases, both.


Interfaces


Interfaces are a key structure in my system. They are used to define what a System can process, and what an Entity is capable of. The Interfaces that are relevant for rendering are: IRenderable and IAnimatable.


The interfaces simply tell the system which components are available. For example, the rendering system needs to know the bounding box of the entity and the image to draw. In my case, that would be the SpatialComponent and the ImageComponent. So it looks like this:


public interface IRenderable {
    SpatialComponent Spatial { get; }

    ImageComponent Image { get; }
}

The RenderingSystem


So how does the rendering system draw an entity? It's actually quite simple, so I'll just show you the stripped down class to give you an idea:


public class RenderSystem {
    private SpriteBatch batch;

    public RenderSystem(SpriteBatch batch) {
        this.batch = batch;
    }

    public void Draw(List<IRenderable> list) {
        foreach (IRenderable obj in list) {
            this.batch.Draw(
                obj.Image.Texture,
                obj.Spatial.Position,
                obj.Image.Source,
                Color.White);
        }
    }
}


Looking at the class, the render system doesn't even know what an Entity is. All it knows about is IRenderable and it is simply given a list of them to draw.


How It All Works


It may help to also understand how I create new game objects and how I feed them to the systems.


Creating Entities


All game objects inherit from Entity, and any applicable interfaces that describe what that game object can do. Just about everything that is animated on screen looks like this:


public class MyAnimatedWidget : Entity, IRenderable, IAnimatable {}

Feeding the Systems


I keep a list of all entities that exist in the game world in a single List<Entity> gameObjects. Each frame, I sift through that list and copy object references into more lists based on interface type, such as List<IRenderable> renderableObjects and List<IAnimatable> animatableObjects. This way, if different systems need to process the same entity, they can. Then I simply hand those lists to each of the systems' Update or Draw methods and let the systems do their work.
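A minimal sketch of that per-frame sifting, using the list and system names from this answer (the Update/Draw signatures here are assumptions):

// Rebuild the per-interface lists each frame from the master entity list.
renderableObjects.Clear();
animatableObjects.Clear();

foreach (Entity entity in gameObjects) {
    if (!entity.IsAlive) continue;

    // An entity can land in several lists if it implements several interfaces.
    IRenderable renderable = entity as IRenderable;
    if (renderable != null) renderableObjects.Add(renderable);

    IAnimatable animatable = entity as IAnimatable;
    if (animatable != null) animatableObjects.Add(animatable);
}

// Hand the filtered lists to the systems.
animationSystem.Update(elapsedSeconds, animatableObjects);
renderSystem.Draw(renderableObjects);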



Animation


You might be curious how the animation system works. In my case you may want to see the IAnimatable interface:


public interface IAnimatable {
    AnimationComponent Animation { get; }
    ImageComponent Image { get; set; }
}

The key thing to notice here is that the ImageComponent property of the IAnimatable interface is not read-only; it has a setter.


As you may have guessed, the animation component just holds data about the animation; a list of frames (which are image components), the current frame, the number of frames per second to be drawn, the elapsed time since the last frame increment, and other options.


The animation system takes advantage of the rendering system and image component relationship. It simply changes the image component of the entity as it increments the animation's frame. That way, the animation is rendered indirectly by the rendering system.
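A minimal sketch of that frame-swapping idea, assuming a hypothetical AnimationComponent with a frame list, a frames-per-second value, a current-frame index, and an elapsed-time accumulator (the property names below are made up; the answer only describes the data, not the names):

public class AnimationSystem {
    public void Update(float elapsedSeconds, List<IAnimatable> list) {
        foreach (IAnimatable obj in list) {
            AnimationComponent anim = obj.Animation;
            anim.Elapsed += elapsedSeconds;

            float secondsPerFrame = 1f / anim.FramesPerSecond;
            while (anim.Elapsed >= secondsPerFrame) {
                anim.Elapsed -= secondsPerFrame;
                anim.CurrentFrame = (anim.CurrentFrame + 1) % anim.Frames.Count;
            }

            // The indirect-rendering trick: point the entity's Image at the new frame
            // and let the rendering system draw it without knowing about animation.
            obj.Image = anim.Frames[anim.CurrentFrame];
        }
    }
}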



prepositions - Use of "while" vs "as"


What does the word "as" describe in the following sentence?




As I was about to get out of bed, I heard a noise coming from the kitchen downstairs.





  • Does "as" imply simultaneous short actions?




  • Is "as" just background information?





  • Can "while" be used instead?





Answer



"While" will indicate an action in progress, but "as", as you use it here, is simply a marker of the time when you were getting up (just like @Mowzer said), hence it's the right word to use, in my opinion.


Sunday, July 29, 2018

meaning - "Listen for" to mean "left with a feeling"


The other day I was given an answer:



In your sentence I would use give, give does mean a present, a gift



I was going to give you another book




Here is a warning: I am listening for the 'but' at the end of that because of the 'was'.



I was going to give you another book, but Judy said your reading pile is already a year long




The sentence with "listen for" struck me as odd. It sounds as if the person is listening for the "but" right now. I'd at least say:



The sentence left me listening for the "but".




Or rather:



The sentence left me with the feeling that there must be some sort of continuation. That it was cut off in the middle. So here's the—one might say—full version:



I'm not saying that the original sentence is incorrect. I'm trying to understand why it's okay to put the idea that way. Can you explain?



Answer



First I'll explain the relevant sense of for in other contexts, and then I'll explain why the sentence that you found odd makes sense.


"Looking for", "listening for", etc.


Here are some similar uses of for. From your comments, it sounds like you already understand listen for, but I'm providing these just to be sure.




I reached for my keys on the dresser when I woke up, but they weren't there. I spent an hour looking for my keys until I remembered that I left them in my coat pocket last night.


A doctor can apply a stethoscope to a patient's abdomen and listen for gurgling sounds. If there is only silence, the doctor will know that there is a possible blockage. [Source: Take Five Minutes by Ruth Foster (2001), slightly edited.]


Woodpeckers are thought to listen for insects, and owls rely on sound when hunting their prey. [Source: Creating a Bird-Watcher's Journal by Claire Walker Leslie and Charles Roth (1999).]


I've been standing here waiting for a bus for twenty minutes now.



These all indicate directing attention toward finding something decided or imagined in advance, by looking, listening, reaching, etc. The same idea occurs in phrases like "looking for a job".


Your example


Suppose that you are sitting indoors and you hear the squeal of tires against asphalt outside, suggesting that the driver of a car on the road outside has just slammed on the brakes to avoid an accident. Most likely, you then listen for the sound of a crash or an impact, because you know that that is likely to happen next, and you are interested to know if an accident occurs and, if so, how bad it is.


"I was going to _____" very often is the first half of a sentence where the second half explains why I did not _____. For example:




I was going to cook breakfast, but we were out of eggs.


I was going to marry him, until I discovered that he was wanted for assault and battery in four states.


I was going to give you a copy of Waiting for Godot, but then I remembered that you hate surrealism.



So, when we hear "I was going to _____," we are primed to next hear a conjunction introducing the part of the sentence that explains why _____ didn't happen. That's what listening for or waiting for meant in WendyG's answer.


Present continuous


If you are wondering why she wrote in the present continuous tense, "I am listening for the 'but' at the end of that," that is a way to narrowly locate her feeling of expectation right at the moment when the sentence "I was going to give you another book" ends. The present continuous tense primarily means action in progress, but commonly takes on variations of that meaning in different contexts. In this context, the primary meaning is altered to "something happening at a very specific moment" as well as "very briefly", i.e. the action is transient. You are right that "The sentence left me listening for the 'but'" means the same thing. Here's another example of this use of the present continuous:



When the car hit the front of the house, I looked up—and the wall is cracking! Luckily it held together.




The shift from past tense to present continuous is a way to make the action seem "more present" to the listener. It's somewhat informal.


c# - How can I handle game-state updates in an MMO while the player is logged out?


I have a plan to build an MMO strategy game like Goodgame Empire or Travian for Windows Phone. I want to program it in C# with MonoGame (because I have some good experience with it).


But I still can't figure out how to do game-server communication.


My plan is to communicate with the database through a web service (WCF).


For example, if a player builds a building, the game saves information about that building to the database through a web service method.



But what if the player isn't logged in?


For example, farms produce some food every hour. How do I work out how much food the player has when they log into the game after some time away?


If you have some idea please let me know ;)



Answer



Your server would have some kind of background process (it could also be implemented as a timer or a thread) which runs at regular intervals and updates all players. The process would run daily, hourly or every few minutes, depending on how often you want the players' farms to update and generate resources.


Any online players would be notified immediately that their resource count has changed. Any offline players would be notified of their new resource count as soon as they log in.
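A minimal sketch of such a background process, using a hypothetical UpdateAllFarms method that adds the produced food to every player's stored resource count:

using System;
using System.Threading;

public class FarmTickService
{
    private Timer _timer;

    public void Start()
    {
        // Run once per hour; pick the interval to match how often farms produce.
        _timer = new Timer(_ => UpdateAllFarms(), null,
                           TimeSpan.Zero, TimeSpan.FromHours(1));
    }

    private void UpdateAllFarms()
    {
        // For every player, online or offline: add one hour's worth of production
        // to the database, then push a notification to the players who are online.
    }
}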


An alternative solution would be to update the resource count of an offline player as soon as they log in: check how much time has passed since they last logged out and update their resource count accordingly. But keep in mind that this makes offline interaction difficult. When other players can see another player's resources, they won't get an up-to-date number until that player logs in. Checking whether or not a player's resources grow could be used to tell when they are online, which might violate their privacy and/or give other players knowledge about their opponent's online behaviour that you don't want them to have. Another problem you might run into is that it can be difficult to make sure that offline and online processing work in exactly the same way; subtle differences between them could lead to exploitable bugs.
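A minimal sketch of the catch-up-on-login alternative, assuming a hypothetical player record that stores a production rate and the timestamp of its last update:

using System;

public class PlayerRecord
{
    public double Food;
    public double FoodPerHour;
    public DateTime LastUpdatedUtc;
}

public static class ResourceCatchUp
{
    // Call when the player logs in (or whenever an up-to-date value is needed).
    public static void ApplyOfflineProduction(PlayerRecord player, DateTime nowUtc)
    {
        TimeSpan elapsed = nowUtc - player.LastUpdatedUtc;
        if (elapsed <= TimeSpan.Zero) return;

        // Grant everything produced while the player was away.
        player.Food += player.FoodPerHour * elapsed.TotalHours;
        player.LastUpdatedUtc = nowUtc;
    }
}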


Unity c#: Interface object never equals null


I've created an interface for some mechanic I'm using to interact with things in my game. Now I've noticed that checking whether a value of that interface type is null never returns true.

Here's a screenshot of where this happens:


[screenshot of the code where this happens]


The error I get is the following:


MissingReferenceException: The object of type 'InteractiveItem' has been destroyed but you are still trying to access it.
Your script should either check if it is null or you should not destroy the object.
InteractiveItem.CurrentGameObject () (at Assets/Scripts/Interactive/InteractiveItem.cs:10)
Interactive.ManualStopInteract () (at Assets/Scripts/Interactive/Interactive.cs:30)

The line where this fails is:


if (IsInteracting() && this.interacting.CurrentGameObject().GetComponent() == null && !MainReferences.UIReferences.IsAnyMenuWindowOpen()) {

// do something
}

The IsInteracting() check is, for some reason, returning true here, and the last line of the stack trace (InteractiveItem.cs:10) is inside the interacting.CurrentGameObject() method.


I don't get how this can happen or how I should solve it. As far as I know, an interface is a nullable type.



Answer



I have a hypothesis for what might be causing this...


First, I need to explain something about null in Unity: when an instance descended from UnityEngine.Object (including GameObject or MonoBehaviour) gets Destroy()ed, it does not actually become null.


(In C#, a variable will only hold a value of null if it's uninitialized, or if it's been assigned myVariable = null explicitly - nothing can delete the object out from under you as long as any active script holds a reference to it)


Once the Destroy() takes effect at the end of the current frame's updates (before rendering), references to the instance will compare as equal to null, because Unity overloads the == operator for UnityEngine.Object. But the reference is still non-null. Try this example:



IEnumerator NullTest()
{
    var myObject = new GameObject();
    Destroy(myObject);

    Debug.Log("Is it null immediately? "
        + (myObject == null)); // false

    yield return null; // Wait one frame for Destroy() to take effect

    Debug.Log("NOW is it null? "
        + (myObject == null)); // true

    Debug.Log("But is it *really* null? "
        + System.Object.ReferenceEquals(myObject, null)); // false
}

This "pseudo-null" stub is what lets Unity understand what you were trying to do and give you a tailored error message:



MissingReferenceException: The object of type 'InteractiveItem' has been destroyed but you are still trying to access it.




All real null values look alike, so if it was a real null Unity wouldn't be able to tell it came from a destroyed object.


Okay, so with that background, why is your code giving this confusing result?


Without seeing more of your code it's hard to say for sure, but I have a suspicion that you're implementing this interface on a MonoBehaviour, and your null check doesn't know it's working with a class descended from UnityEngine.Object - all it knows is that it implements the InteractiveItem interface.


So this line:


return interacting != null;

is using the standard comparison, like System.Object.ReferenceEquals(). That replies "Well, no, it's been Destroy()ed, but it's not literally null" so IsInteracting() returns true.


Then your code proceeds and tries to call


this.interacting.CurrentGameObject()


and Unity steps in to say "Why are you trying to access a member of a Destroy()ed MonoBehaviour?" and throws an error.


So, some possible fixes...



  • Make InteractiveItem descend from MonoBehaviour if everything
    implementing it is going to be a MonoBehaviour anyway, so the
    correct comparison is used automatically.

  • Try casting interacting to (something descended from...)
    UnityEngine.Object before checking for null, to catch when it's
    been Destroy()ed (see the sketch after this list).


  • Make your interactive objects aware that someone is trying to
    interact with them, and use an OnDestroy method to notify the
    interactor when the interaction is no longer available.
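A minimal sketch of the second option, assuming interacting is stored as the interface type; casting to UnityEngine.Object routes the comparison through Unity's overloaded == so that destroyed objects count as null:

private bool IsInteracting()
{
    // Plain reference equality stays non-null after Destroy(), so check whether the
    // implementer is a UnityEngine.Object and, if so, use Unity's overloaded == instead.
    var unityObj = interacting as UnityEngine.Object;
    if (!object.ReferenceEquals(unityObj, null))
        return unityObj != null;   // overloaded ==: false once the object is destroyed

    // Not a Unity object at all (or genuinely null): the ordinary check is fine.
    return interacting != null;
}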


adjectives - Grammatically correct? 'big fat funny cats' and 'fat silly cats'


As someone who never put much emphasis on learning grammar, I still sometimes find a case that calls for grammar rules.


I was asked which one is correct: fat silly cats or silly fat cats?


Intuitively, I found nothing wrong with either of them. So I searched the web.


The Adjective Order I found from British Council's website is: opinion, size, shape, age, colour, nationality, and material.


This implies that the correct answer should be silly fat cats.


However, based on my googling (is that even a word?), the use of silly fat cats is rare, compared to funny fat cats. But then again, funny fat cats doesn't sound right to me. I would personally prefer big fat funny cats. (I noticed that some people on the web wrote it as big, fat, funny cats, while others simply omitted commas).


So which are the correct usages? (if both are passable, which one is preferred)



  • silly fat cats, or fat silly cats


  • funny fat cats, or fat funny cats

  • funny big fat cats, or big fat funny cats

  • funny really big fat cats, or really big fat funny cats



Answer




silly fat cats (or) fat silly cats



Silly fat cats is more euphonious. Both are grammatically correct.


'Fat cats' idiomatically means rich people, or rich powerful people. So it could be that you are calling those rich people silly, as opposed to calling those silly cats plump.




funny fat cats (or) fat funny cats



The rhythm of these phrases is about the same, so either.



funny big fat cats (or) big fat funny cats



Big fat funny cats rolls off the tongue. It has much better cadence. So definitely big fat funny cats.



funny really big fat cats, or really big fat funny cats




Again, the cadence is the deciding factor - really big fat funny cats.


really in this situation means "very", and modifies the adjective which immediately follows, so you are right to move the adjective "big" along with it...


Compare



A really expensive black leather handbag/purse.



This bag is black, as well as being very expensive.



An expensive really black leather handbag/purse.




This bag is expensive, as well as being very black.


In OpenGL, how can I discover the depth range of a depth buffer?


I am doing a GL multi-pass rendering app for iOS. The first pass renders to a depth buffer texture. The second pass uses the values in the depth buffer to control the application of a fragment shader. I want to rescale the values in the depth buffer to something useful but before I can do that I need to know the depth value range of the depth buffer values. How do I do this?



Answer



The range of the values written to the depth buffer is whatever you want it to be. Typically they fall in the 0-to-1 range. The actual value that is written into the depth buffer is computed during the viewport transformation, based on the Z value of the vertex in NDC space (after the perspective divide by w in clip space).


The NDC depth value (Z, after the perspective divide by W) is scaled by the depth portion of the viewport transformation (which brings your X and Y coordinates into a coordinate space you'd associate with pixels in the window), and is then scaled by 2^n − 1, where n is the bit precision of the depth buffer. The resulting value is written to the depth buffer.



OpenGL splits the definition of the viewport transformation into the glViewport and glDepthRange calls. glDepthRange controls the scale factor responsible for the depth range you are asking about. You can call glGetFloatv with the GL_DEPTH_RANGE selector to recover the current range. This lets you make use of the range without assuming it's 0 to 1 (although 99.9% of the time, in practice, nobody ever changes it).
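Putting those pieces together, a sketch of the standard mapping (writing \$n\$ and \$f\$ for the near and far values set by glDepthRange, which default to 0 and 1):

\$z_{window} = \frac{f - n}{2} z_{NDC} + \frac{f + n}{2}\$

A fixed-point depth buffer then stores this value scaled by \$2^{bits} - 1\$, as described above.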


Further reading, if you want some insight on how to reconstruct the math to follow the Z value all the way from eye space to the depth buffer.



grammar - How does 'the better to ——' equate with 'So as to —— better'?




7. the better to ——   =   So as to —— better:



This comment revealed this grammatical confusion of mine.


1. On the left-hand side (abbreviated as LHS) above, 'the better' is a noun.


2. Yet on the right-hand side, 'better' is an adverb, modifying the infinitive 'to ——' ? This contradicts 1?
3. Also, 'so as to' just seems to have adventitiously appeared, since it's absent on the LHS?


So how can these two phrases be equal?




Saturday, July 28, 2018

What is the grammar rule here? How do I explain this in a simple way?


So my Japanese friend asked me why we can say


Is he gone? Is he finished?


But we can't say


Is he changed? Is he arrived? Is he fallen? Is he shrunk? Is he departed? Is he died? Is he melted?


I'm wondering what the grammar rule here is. Changed, fallen, and departed can be used as adjectives, so shouldn't it be okay to use them in that kind of sentence? But I feel a sentence like "Is he fallen?" sounds unnatural.


How do I go about explaining this in an easy-to-understand way to my Japanese friend?


I appreciate any help! Thank you.




What do "position.transform" and "Input.GetAxis" mean in Unity?


I am making the transition from Game Maker to Unity, but I feel lost when I look at Unity's programming.


When I followed a tutorial to make basic movements, I had this code:


public Vector3 playerPos;

void Update()
{
    float V_X = transform.position.x;
    V_X += Input.GetAxis("Horizontal") / 2;
    playerPos = new Vector3(V_X, 0, 0);
    transform.position = playerPos;
}

A few things are confusing me here:




  • What is transform? And why is position something that is a part of it? Why not have the position of the objects represented by just x rather than transform.position.x?





  • Is GetAxis(...) a command that has a return value? Why is it "sub-categorized" under Input? And why not detect the key presses directly?






Robust Architecture for Flash Games?


What's a good architecture for Flash games? I've read that MVC is great, with symbols just used purely for views (set states, use the class to manage which state/frame to show).


I would presumably then stick all my code in one or more .AS files, use object-oriented programming, try to unit test whatever I can, and make my art and symbols purely view-state stuff with no code.


Has anyone tried this? On anything other than a small game? How do these big flash RPGs and MMORPGs architect their games?




Answer



Although MVC is a widely used pattern, I think it's not really appropriate for games. When developing games you'll also deal with sound, physics, networking, etc. Where do they belong: Model, View or Controller?


You'll find that model and controller (sometimes even the view) are most often better combined in one class and/or that there are other patterns that are better suited for game development than MVC, for example the component pattern.


As a flash developer, I encourage you to separate your code from your assets though. If you're using an IDE like Flash Builder, there's no way around that anyways.


Also try to separate things that represent different layers of information/logic, like sound, rendering, AI, etc., to make these components reusable. You can leverage the Flash event system or use a signal-slot implementation to loosely couple these objects. E.g. the player class (or maybe even the collision handler) dispatches a signal whenever the player collects a coin. By connecting the "collect-coin" signal to the sound class, the appropriate sound can be played without explicitly calling any sound-related code inside the player class. Of course this signal/event could also be connected to the keep-track-of-score class, etc. This architecture allows you to easily attach more components later on and/or swap them with different ones without rewriting huge portions of your existing code.
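The same loose-coupling idea, sketched with C# events for brevity (the class and member names are made up for illustration; in ActionScript you would use the event system or a signal-slot library as described above):

using System;

// The player only announces what happened; it has no idea who is listening.
public class Player
{
    public event Action CoinCollected;

    public void CollectCoin()
    {
        // ...gameplay logic for picking up the coin...
        CoinCollected?.Invoke();
    }
}

public class SoundSystem
{
    public void OnCoinCollected() { /* play the coin sound */ }
}

public class Score
{
    public int Coins { get; private set; }
    public void OnCoinCollected() { Coins++; }
}

// Wiring it up elsewhere keeps the player free of sound- or score-related code:
//   player.CoinCollected += soundSystem.OnCoinCollected;
//   player.CoinCollected += score.OnCoinCollected;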


I think that most of the patterns commonly used for game development also apply when developing games in ActionScript. So just go ahead and look for some good game-development patterns and make use of them in your next Flash project.


Btw. I have never used Unit-Tests for a game, but maybe that's just me :)


grammar - How to choose tenses in story telling?


When someone asks me to describe a character from a story, or to explain the story briefly, should I use the past tense or the present tense?



Below is the story of the main character of the novel "Devi Chaudhurani" (written by Bankimchandra Chattopadhyay), on which I was asked a question in the exam. Here is the link that explains the novel and the character briefly.




  • Prafulla is married but is shunned by her wealthy father-in-law, Haraballabh, because of a spat between him and her father on the day of her wedding. By custom prevalent at that time, a girl, once married, could not be divorced or remarried. Heartbroken at the fate of his only child, her father died after a few years, leaving the family in penury.




  • Prafulla takes the drastic step to flee in the middle of the night to find the house of her in-laws whom she has never known, without any money, with knowledge of only the name of the village and name of her father-in-law.




Here is the question that was asked in the exam "Who is Prafulla? Describe her character briefly." Should I answer in past tense or present tense in the answer?




  • Prafulla is the main character of the novel Devi Chaudhurani. She lives/lived in a small village of Bengal. She is idolized in the novel as an inspiration to the women of Bengal at that time. Prafulla is/was shunned by Haraballabh, her father-in-law. This made/makes her father feel heartbroken and depressed, which led/leads to his death. This made/makes Prafulla stronger than ever. Through the character of Prafulla, the author described/describes the hard yet successful life of a woman.


I think both tenses are correct, but I'm not sure. Could you please explain? Can I use both tenses?




c# - XNA shield effect with a Primative sphere problem


I'm having an issue with a shield effect I'm trying to develop. I want a shield effect that surrounds part of a model like this: http://i.imgur.com/jPvrf.png


I currently have this: http://i.imgur.com/Jdin7.png (The red lines are a simple texture, a black background with a red cross in it, used for testing purposes: http://i.imgur.com/ODtzk.png where the smaller cross in the middle shows the contact point.)



This sphere is drawn via a primitive (DrawIndexedPrimitives)


This is how I calculate the pieces of the sphere, using a class I've called Sphere


(this class is based off the code here: http://xbox.create.msdn.com/en-US/education/catalog/sample/primitives_3d)


public class Sphere
{
    // During the process of constructing a primitive model, vertex
    // and index data is stored on the CPU in these managed lists.
    List<VertexPositionNormal> vertices = new List<VertexPositionNormal>();
    List<ushort> indices = new List<ushort>();


    // Once all the geometry has been specified, the InitializePrimitive
// method copies the vertex and index data into these buffers, which
// store it on the GPU ready for efficient rendering.
VertexBuffer vertexBuffer;
IndexBuffer indexBuffer;
BasicEffect basicEffect;


public Vector3 position = Vector3.Zero;
public Matrix RotationMatrix = Matrix.Identity;

public Texture2D texture;

///
/// Constructs a new sphere primitive,
/// with the specified size and tessellation level.
///


public Sphere(float diameter, int tessellation, Texture2D text, float up, float down, float portstar, float frontback)
{
texture = text;
if (tessellation < 3)
throw new ArgumentOutOfRangeException("tessellation");

int verticalSegments = tessellation;
int horizontalSegments = tessellation * 2;

float radius = diameter / 2;


// Start with a single vertex at the bottom of the sphere.
AddVertex(Vector3.Down * ((radius / up) + 1), Vector3.Down, Vector2.Zero);//bottom position5

// Create rings of vertices at progressively higher latitudes.
for (int i = 0; i < verticalSegments - 1; i++)
{
float latitude = ((i + 1) * MathHelper.Pi /
verticalSegments) - MathHelper.PiOver2;


float dy = (float)Math.Sin(latitude / up);//(up)5
float dxz = (float)Math.Cos(latitude);

// Create a single ring of vertices at this latitude.
for (int j = 0; j < horizontalSegments; j++)
{
float longitude = j * MathHelper.TwoPi / horizontalSegments;

float dx = (float)(Math.Cos(longitude) * dxz) / portstar;//port and starboard (right)2
float dz = (float)(Math.Sin(longitude) * dxz) * frontback;//front and back1.4


Vector3 normal = new Vector3(dx, dy, dz);

AddVertex(normal * radius, normal, new Vector2(j, i));
}
}

// Finish with a single vertex at the top of the sphere.
AddVertex(Vector3.Up * ((radius / down) + 1), Vector3.Up, Vector2.One);//top position5


// Create a fan connecting the bottom vertex to the bottom latitude ring.
for (int i = 0; i < horizontalSegments; i++)
{
AddIndex(0);
AddIndex(1 + (i + 1) % horizontalSegments);
AddIndex(1 + i);
}

// Fill the sphere body with triangles joining each pair of latitude rings.
for (int i = 0; i < verticalSegments - 2; i++)

{
for (int j = 0; j < horizontalSegments; j++)
{
int nextI = i + 1;
int nextJ = (j + 1) % horizontalSegments;

AddIndex(1 + i * horizontalSegments + j);
AddIndex(1 + i * horizontalSegments + nextJ);
AddIndex(1 + nextI * horizontalSegments + j);


AddIndex(1 + i * horizontalSegments + nextJ);
AddIndex(1 + nextI * horizontalSegments + nextJ);
AddIndex(1 + nextI * horizontalSegments + j);
}
}

// Create a fan connecting the top vertex to the top latitude ring.
for (int i = 0; i < horizontalSegments; i++)
{
AddIndex(CurrentVertex - 1);

AddIndex(CurrentVertex - 2 - (i + 1) % horizontalSegments);
AddIndex(CurrentVertex - 2 - i);
}

//InitializePrimitive(graphicsDevice);
}

///
/// Adds a new vertex to the primitive model. This should only be called
/// during the initialization process, before InitializePrimitive.

///

protected void AddVertex(Vector3 position, Vector3 normal, Vector2 texturecoordinate)
{
vertices.Add(new VertexPositionNormal(position, normal, texturecoordinate));
}


///
/// Adds a new index to the primitive model. This should only be called
/// during the initialization process, before InitializePrimitive.

///

protected void AddIndex(int index)
{
if (index > ushort.MaxValue)
throw new ArgumentOutOfRangeException("index");

indices.Add((ushort)index);
}



///
/// Queries the index of the current vertex. This starts at
/// zero, and increments every time AddVertex is called.
///

protected int CurrentVertex
{
get { return vertices.Count; }
}

public void InitializePrimitive(GraphicsDevice graphicsDevice)

{
// Create a vertex declaration, describing the format of our vertex data.

// Create a vertex buffer, and copy our vertex data into it.
vertexBuffer = new VertexBuffer(graphicsDevice,
typeof(VertexPositionNormal),
vertices.Count, BufferUsage.None);

vertexBuffer.SetData(vertices.ToArray());


// Create an index buffer, and copy our index data into it.
indexBuffer = new IndexBuffer(graphicsDevice, typeof(ushort),
indices.Count, BufferUsage.None);

indexBuffer.SetData(indices.ToArray());

// Create a BasicEffect, which will be used to render the primitive.
basicEffect = new BasicEffect(graphicsDevice);
//basicEffect.EnableDefaultLighting();
}


///
/// Draws the primitive model, using the specified effect. Unlike the other
/// Draw overload where you just specify the world/view/projection matrices
/// and color, this method does not set any renderstates, so you must make
/// sure all states are set to sensible values before you call it.
///

public void Draw(Effect effect)
{
GraphicsDevice graphicsDevice = effect.GraphicsDevice;


// Set our vertex declaration, vertex buffer, and index buffer.
graphicsDevice.SetVertexBuffer(vertexBuffer);

graphicsDevice.Indices = indexBuffer;

graphicsDevice.BlendState = BlendState.Additive;

foreach (EffectPass effectPass in effect.CurrentTechnique.Passes)
{

effectPass.Apply();

int primitiveCount = indices.Count / 3;

graphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0,
vertices.Count, 0, primitiveCount);

}
graphicsDevice.BlendState = BlendState.Opaque;
}



///
/// Draws the primitive model, using a BasicEffect shader with default
/// lighting. Unlike the other Draw overload where you specify a custom
/// effect, this method sets important renderstates to sensible values
/// for 3D model rendering, so you do not need to set these states before
/// you call it.
///

public void Draw(Camera camera, Color color)

{
// Set BasicEffect parameters.
basicEffect.World = GetWorld();
basicEffect.View = camera.view;
basicEffect.Projection = camera.projection;
basicEffect.DiffuseColor = color.ToVector3();
basicEffect.TextureEnabled = true;
basicEffect.Texture = texture;



GraphicsDevice device = basicEffect.GraphicsDevice;
device.DepthStencilState = DepthStencilState.Default;

if (color.A < 255)
{
// Set renderstates for alpha blended rendering.
device.BlendState = BlendState.AlphaBlend;
}
else
{

// Set renderstates for opaque rendering.
device.BlendState = BlendState.Opaque;
}

// Draw the model, using BasicEffect.
Draw(basicEffect);
}

public virtual Matrix GetWorld()
{

return /*world */ Matrix.CreateScale(1f) * RotationMatrix * Matrix.CreateTranslation(position);
}
}



public struct VertexPositionNormal : IVertexType
{
public Vector3 Position;
public Vector3 Normal;

public Vector2 TextureCoordinate;


///
/// Constructor.
///

public VertexPositionNormal(Vector3 position, Vector3 normal, Vector2 textCoor)
{
Position = position;
Normal = normal;

TextureCoordinate = textCoor;
}

///
/// A VertexDeclaration object, which contains information about the vertex
/// elements contained within this struct.
///

public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
(
new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),

new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
new VertexElement(24, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0)
);

VertexDeclaration IVertexType.VertexDeclaration
{
get { return VertexPositionNormal.VertexDeclaration; }
}

}


A simple call to the class initialises it; the Draw method is then called from the master draw method in the GameComponent.


My current thoughts on this are:



  1. The direction of the weapon hitting the ship is used to get the middle position for the texture

  2. Wrap a texture around the drawn sphere based on this point of contact


The problem is I'm not sure how to do this. Can anyone help? Or if you have a better idea, please tell me; I'm open to suggestions. :-) Thanks.



Answer



Looks like you're on the right track. I did this before in a game, and instead of changing where on the texture the impact is centered, I merely rotated the sphere. I also used a procedural shader instead of a texture. Here's some of the code I used (public domain, do whatever you want with it).



EDIT: Uploaded a video of it: http://www.youtube.com/watch?v=DqKkaJHf1gg


(NOTE: This code is for XNA 3.1, and has a couple calls to internal functions, so it won't work out of the box. It also does a lot of allocation, so will not perform well on the 360)


[SingletonRenderer]
public sealed class ShieldRenderer : Renderer
{
private static readonly Texture3D _noiseTexture = SpacerGame.load("textures/noise");

///
/// Since the default render state culls back-facing edges (i.e. those with normals opposite the
/// viewer), the shader is not see-through. This could be fixed by order-independent transparency

/// or disabling backface culling, but since we're only going to be seeing the shader from one
/// direction, it's easier (and faster, and provides more control) to fake the effect by rotating
/// the sphere a bit towards the camera.
///

private const float ROTATION_Y = -MathHelper.Pi * 3 / 8;
private const float NOISE_SPEED = 0.15f;
private const float IMPACT_TIME = 1.0f;

private readonly Sphere _sphere;
private readonly ShieldShader _shader;

private readonly ReaderWriterCollection, ShieldImpact> _impacts;
private readonly ShieldParameters _params;

public ShieldRenderer() : base(RenderPass.SHIELDS)
{
_params = Effects.initShieldRenderer(this);
_sphere = new Sphere(1, 20);
_shader = new ShieldShader
{
noise = _noiseTexture,

speed = NOISE_SPEED,
};
_impacts = new ReaderWriterCollection, ShieldImpact>();
}

public override void draw(DeltaT dt)
{
_impacts.synchronize();
foreach(ShieldImpact impact in _impacts)
{

// Update time
if(impact.startTime == 0) impact.startTime = dt.totalActual;
else impact.time += dt.dtAt(impact.target.pos);

// Kill off dead impacts
if (impact.time > IMPACT_TIME)
{
_impacts.Remove(impact);
continue;
}


// Skip offscreen targets
if(!impact.target.bounds.isPartiallyOnScreen())
continue;

_shader.worldViewProj = impact.baseTransform *
impact.target.pos.toScreenWvpMatrix();
_shader.startTime = impact.startTime;
_shader.color = impact.color;
_shader.time = impact.time / IMPACT_TIME;


_shader.begin();
_sphere.draw(_shader.shield);
_shader.end();
}
}

protected override void dispose()
{
base.dispose();

_sphere.Dispose();
_shader.Dispose();
}

protected override void finalize()
{
base.finalize();
_impacts.synchronize();
_impacts.Clear();
_impacts.Dispose();

}

public void addImpact(Entity target, float direction, float shieldStrength)
{
// TODO -- figure out right shield radius
_impacts.Add(new ShieldImpact
{
target = target,
color = _params.colors.sample(shieldStrength).ToVector3(),
baseTransform = Matrix.CreateScale(target.size.X) *

Matrix.CreateRotationY(ROTATION_Y) *
Matrix.CreateRotationZ(-direction),
});
}

private sealed class ShieldImpact : ISimpleListNode
{
public Entity target;
public Matrix baseTransform;
public Vector3 color;

public float startTime;
public float time;

ShieldImpact ISimpleListNode.next { get; set; }
ShieldImpact ISimpleListNode.prev { get; set; }
}
}

public sealed class ShieldParameters
{

public ColorGradient colors;
}

Shader:


#include "common.fxh"

// @params
float4x4 _worldViewProj;
texture _noise;
float _time;

float _startTime;
float _speed;
float3 _color;
// @end

sampler sNoise = sampler_state { texture = <_noise>; magfilter = ANISOTROPIC; minfilter = ANISOTROPIC; mipfilter = ANISOTROPIC; AddressU = WRAP; AddressV = WRAP; };

PixelInfo shieldVS(float4 inPos : POSITION, float2 inUv : TEXCOORD)
{
PixelInfo p;

p.uv = inUv;
p.pos = mul(inPos, _worldViewProj);
return p;
}

static const float NOISINESS = 1;
static const float NOISE_SCALE_PRE_EXP = 1.25;
static const float NOISE_SCALE_POST_EXP = 3;
static const float NOISE_EXP = 4;


static const float DISTANCE_EXP_MIN = 0.015;
static const float DISTANCE_EXP_MAX = 0.06;
static const float DISTANCE_VALUE_CLAMP = 0.1;
static const float DISTANCE_SCALE = 144;

static const float TIME_SCALE_PRE_EXP = 3;
static const float TIME_SCALE_POST_EXP = 1;
static const float TIME_EXP = 3;

float4 shieldPS(PixelInfo p) : COLOR0

{
// Get some noise-sampled noise (a la the background)
float3 vpos;
float3 spos = float3(p.uv * NOISINESS + float2(_startTime, _startTime), _time * _speed);
vpos.x = tex3D(sNoise, spos + 0.00).r - 0.5;
vpos.y = tex3D(sNoise, spos + 0.33).r - 0.5;
vpos.z = tex3D(sNoise, spos + 0.67).r - 0.5;
float sample = tex3D(sNoise, vpos).r;
sample = pow(sample * NOISE_SCALE_PRE_EXP, NOISE_EXP) * NOISE_SCALE_POST_EXP;


// Fade out with time
float timeFactor = (pow((1 - _time) * TIME_SCALE_PRE_EXP, TIME_EXP)) * TIME_SCALE_POST_EXP;

// Glow more closer to the center; spread out over time
float d = distance(float2(smoothstep(0.125, 0.875, p.uv.x), p.uv.y), float2(0.5, 0.5));
float distanceFactor = 1 - pow(d, lerp(DISTANCE_EXP_MIN, DISTANCE_EXP_MAX, _time));
distanceFactor = smoothstep(DISTANCE_VALUE_CLAMP, 1, distanceFactor * distanceFactor * DISTANCE_SCALE);

// If alpha > 1, multiply the color by it to get an HDRish effect
float alpha = sample * timeFactor * distanceFactor;

return float4(alpha > 1 ? alpha * _color : _color, saturate(alpha));
}

technique shield
{
// @passes
pass shield { VertexShader = compile vs_1_1 shieldVS(); PixelShader = compile ps_2_0 shieldPS(); }
// @end
}


Code for sphere:


/// 
/// Simple sphere mesh generator.
///

public sealed class Sphere : IDisposable
{
private readonly VertexBuffer _vertexBuf;
private readonly IndexBuffer _indexBuf;
private readonly VertexDeclaration _vertexDecl;
private readonly int _nVerticies;

private readonly int _nFaces;

public Sphere(float radius, int slices)
{
_nVerticies = (slices + 1) * (slices + 1);
int nIndicies = 6 * slices * (slices + 1);

var indices = new int[nIndicies];
var vertices = new VertexPositionNormalTexture[_nVerticies];
float thetaStep = MathHelper.Pi / slices;

float phiStep = MathHelper.TwoPi / slices;

int iIndex = 0;
int iVertex = 0;
int iVertex2 = 0;

for (int sliceTheta = 0; sliceTheta < slices + 1; sliceTheta++)
{
float r = (float) Math.Sin(sliceTheta * thetaStep);
float y = (float) Math.Cos(sliceTheta * thetaStep);


for (int slicePhi = 0; slicePhi < (slices + 1); slicePhi++)
{
float x = r * (float) Math.Sin(slicePhi * phiStep);
float z = r * (float) Math.Cos(slicePhi * phiStep);

vertices[iVertex].Position = new Vector3(x, y, z) * radius;
vertices[iVertex].Normal = Vector3.Normalize(new Vector3(x, y, z));
vertices[iVertex].TextureCoordinate = new Vector2((float)slicePhi / slices,
(float)sliceTheta / slices);

iVertex++;

if (sliceTheta != (slices - 1))
{
indices[iIndex++] = iVertex2 + (slices + 1);
indices[iIndex++] = iVertex2 + 1;
indices[iIndex++] = iVertex2;
indices[iIndex++] = iVertex2 + (slices);
indices[iIndex++] = iVertex2 + (slices + 1);
indices[iIndex++] = iVertex2;

iVertex2++;
}
}
}

GraphicsDevice device = RenderManager.current.device;
_vertexBuf = new VertexBuffer(device, typeof(VertexPositionNormalTexture), _nVerticies, BufferUsage.None);
_indexBuf = new IndexBuffer(device, typeof(int), nIndicies, BufferUsage.None);
_vertexDecl = new VertexDeclaration(device, VertexPositionNormalTexture.VertexElements);
_vertexBuf.SetData(vertices, 0, vertices.Length);

_indexBuf.SetData(indices, 0, indices.Length);
_nFaces = nIndicies / 3;
}

public void draw(EffectPass pass)
{
GraphicsDevice device = RenderManager.current.device;
device.Indices = _indexBuf;
device.VertexDeclaration = _vertexDecl;
device.Vertices[0].SetSource(_vertexBuf, 0, VertexPositionNormalTexture.SizeInBytes);


pass.Begin();
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, _nVerticies, 0, _nFaces);
pass.End();
}

public void Dispose()
{
_vertexBuf.Dispose();
_indexBuf.Dispose();

_vertexDecl.Dispose();
}
}

Friday, July 27, 2018

actionscript 3 - How can I make a character walk on uneven walls in a 2D platformer?


I want to have a playable character who can "walk" on an organic surface at any angle, including sideways and upside-down. By "organic" I mean levels with slanted and curved features instead of straight lines at 90-degree angles.


I'm currently working in AS3 (moderate amateur experience) and using Nape (pretty much a newbie) for basic gravity-based physics, to which this walking mechanic will be an obvious exception.


Is there a procedural way to do this kind of walk mechanic, perhaps using Nape constraints? Or would it be best to create explicit walking "paths" following the contours of the level surfaces and use them to constrain the walking movement?



Answer



Here is my complete learning experience, resulting in a pretty much functional version of the movement I wanted, all using Nape's internal methods. All of this code is within my Spider class, pulling some properties from its parent, a Level class.


Most of the other classes and methods are part of the Nape package. Here's the pertinent part of my import list:


import flash.events.TimerEvent;
import flash.utils.Timer;


import nape.callbacks.CbEvent;
import nape.callbacks.CbType;
import nape.callbacks.InteractionCallback;
import nape.callbacks.InteractionListener;
import nape.callbacks.InteractionType;
import nape.callbacks.OptionType;
import nape.dynamics.Arbiter;
import nape.dynamics.ArbiterList;
import nape.geom.Geom;

import nape.geom.Vec2;

First, when the spider is added to the stage, I add listeners to the Nape world for collisions. As I get further into development I will need to differentiate collision groups; for the moment, these callbacks will technically be run when ANY body collides with any other body.


        var opType:OptionType = new OptionType([CbType.ANY_BODY]);
mass = body.mass;
// Listen for collision with level, before, during, and after.
var landDetect:InteractionListener = new InteractionListener(CbEvent.BEGIN, InteractionType.COLLISION, opType, opType, spiderLand)
var moveDetect:InteractionListener = new InteractionListener(CbEvent.ONGOING, InteractionType.COLLISION, opType, opType, spiderMove);
var toDetect:InteractionListener = new InteractionListener(CbEvent.END, InteractionType.COLLISION, opType, opType, takeOff);


Level(this.parent).world.listeners.add(landDetect);
Level(this.parent).world.listeners.add(moveDetect);
Level(this.parent).world.listeners.add(toDetect);

/*
A reference to the spider's parent level's master timer, which also drives the nape world,
runs a callback within the spider class every frame.
*/
Level(this.parent).nTimer.addEventListener(TimerEvent.TIMER, tick);


The callbacks change the spider's "state" property, which is a set of booleans, and record any Nape collision arbiters for later use in my walking logic. They also set and clear toTimer, which allows the spider to lose contact with the level surface for up to 100ms before allowing world gravity to take hold again.


    protected function spiderLand(callBack:InteractionCallback):void {
tArbiters = callBack.arbiters.copy();
state.isGrounded = true;
state.isMidair = false;
body.gravMass = 0;
toTimer.stop();
toTimer.reset();
}


protected function spiderMove(callBack:InteractionCallback):void {
tArbiters = callBack.arbiters.copy();
}

protected function takeOff(callBack:InteractionCallback):void {
tArbiters.clear();
toTimer.reset();
toTimer.start();
}


protected function takeOffTimer(e:TimerEvent):void {
state.isGrounded = false;
state.isMidair = true;
body.gravMass = mass;
state.isMoving = false;
}

Finally, I calculate what forces to apply to the spider based on its state and its relationship to the level geometry. I'll mostly let the comments speak for themselves.


    protected function tick(e:TimerEvent):void {
if(state.isGrounded) {

switch(tArbiters.length) {
/*
If there are no arbiters (i.e. spider is in midair and toTimer hasn't expired),
aim the adhesion force at the nearest point on the level geometry.
*/
case 0:
closestA = Vec2.get();
closestB = Vec2.get();
Geom.distanceBody(body, lvBody, closestA, closestB);
stickForce = closestA.sub(body.position, true);

break;
// For one contact point, aim the adhesion force at that point.
case 1:
stickForce = tArbiters.at(0).collisionArbiter.contacts.at(0).position.sub(body.position, true);
break;
// For multiple contact points, add the vectors to find the average angle.
default:
var taSum:Vec2 = tArbiters.at(0).collisionArbiter.contacts.at(0).position.sub(body.position, true);
tArbiters.copy().foreach(function(a:Arbiter):void {
if(taSum != a.collisionArbiter.contacts.at(0).position.sub(body.position, true))

taSum.addeq(a.collisionArbiter.contacts.at(0).position.sub(body.position, true));
});

stickForce=taSum.copy();
}
// Normalize stickForce's strength.
stickForce.length = 1000;
var curForce:Vec2 = new Vec2(stickForce.x, stickForce.y);

// For graphical purposes, align the body (simulation-based rotation is disabled) with the adhesion force.

body.rotation = stickForce.angle - Math.PI/2;

body.applyImpulse(curForce);

if(state.isMoving) {
// Gives "movement force" a dummy value since (0,0) causes problems.
mForce = new Vec2(10,10);
mForce.length = 1000;

// Dir is movement direction, a boolean. If true, the spider is moving left with respect to the surface; otherwise right.

// Using the corrected "down" angle, move perpendicular to that angle
if(dir) {
mForce.angle = correctAngle()+Math.PI/2;
} else {
mForce.angle = correctAngle()-Math.PI/2;
}
// Flip the spider's graphic depending on direction.
texture.scaleX = dir?-1:1;
// Now apply the movement impulse and decrease speed if it goes over the max.
body.applyImpulse(mForce);

if(body.velocity.length > 1000) body.velocity.length = 1000;

}
}
}

The real sticky part I found was that the angle of movement needed to be in the actual desired direction of movement in a multiple contact point scenario where the spider reaches a sharp angle or sits in a deep valley. Especially since, given my summed vectors for the adhesion force, that force will be pulling AWAY from the direction we want to move instead of perpendicular to it, so we need to counteract that. So I needed logic to pick one of the contact points to use as the basis for the angle of the movement vector.


A side effect of the adhesion force's "pull" is a slight hesitance when the spider reaches a sharp concave angle/curve, but that's actually kind of realistic from a look-and-feel standpoint so unless it causes problems down the road I'll leave it as is. If I need to, I can use a variation on this method to calculate the adhesion force.


    protected function correctAngle():Number {
var angle:Number;

if(tArbiters.length < 2) {
// If there is only one (or zero) contact point(s), the "corrected" angle doesn't change from stickForce's angle.
angle = stickForce.angle;
} else {
/*
For more than one contact point, we want to run perpendicular to the "new" down, so we copy all the
contact point angles into an array...
*/
var angArr:Array = [];
tArbiters.copy().foreach(function(a:Arbiter):void {

var curAng:Number = a.collisionArbiter.contacts.at(0).position.sub(body.position, true).angle;
if (curAng < 0) curAng += Math.PI*2;
angArr.push(curAng);
});
/*
...then we iterate through all those contact points' angles with respect to the spider's COM to figure out
which one is more clockwise or more counterclockwise, depending, with some restrictions...
...Whatever, the correct one.
*/
angle = angArr[0];

for(var i:int = 1; i < angArr.length; i++) {
if(dir) {
if(Math.abs(angArr[i]-angle) < Math.PI)
angle = Math.max(angle, angArr[i]);
else
angle = Math.min(angle, angArr[i]);
}
else {
if(Math.abs(angArr[i]-angle) < Math.PI)
angle = Math.min(angle, angArr[i]);

else
angle = Math.max(angle, angArr[i]);
}
}
}

return angle;
}

This logic is pretty much "perfect," inasmuch as so far it seems to be doing what I want it to do. There is a lingering cosmetic issue, however, in that if I try to align the spider's graphic to either the adhesion or movement forces I find that the spider ends up "leaning" in the direction of movement, which would be ok if he were a two-legged athletic sprinter but he's not, and the angles are highly susceptible to variations in the terrain, so the spider jitters when it goes over the slightest bump. I may pursue a variation on Byte56's solution, sampling the nearby landscape and averaging those angles, to make the spider's orientation smoother and more realistic.



javascript - Get points on a line between two points



I'm making a simple space game in JavaScript, but now I've hit a wall regarding vectors.


The game view is top-down on a 2d grid. When the user clicks on the grid, the space ship will fly to that spot.


So, if I have two sets of points:


{ x : 100.2, y : 100.6 }; // the ship
{ x : 20.5, y : 55.95 }; // the clicked coordinates

If the game loop ticks at 60 iterations per second, and the desired ship velocity is 0.05 points per tick (3 points per second), how do I calculate the new set of coordinates for the ship for each tick of the game loop?


p.s. I do not want to account for inertia, or multiple vectors affecting the ship, I just want the ship to stop whatever it is doing (i.e. flying one way) and move to the clicked coordinates at a static speed.



Answer



In Pseudocode:



speed_per_tick = 0.05
delta_x = x_goal - x_current
delta_y = y_goal - y_current
goal_dist = sqrt( (delta_x * delta_x) + (delta_y * delta_y) )

if (goal_dist > speed_per_tick)
{
    ratio = speed_per_tick / goal_dist
    x_move = ratio * delta_x
    y_move = ratio * delta_y
    new_x_pos = x_move + x_current
    new_y_pos = y_move + y_current
}
else
{
    new_x_pos = x_goal
    new_y_pos = y_goal
}

Thursday, July 26, 2018

word choice - Which way: One and one ARE two? One and one IS two?


Which verb is grammatically correct when used to describe addition?



  • One and one are two.

  • One and one is two.




unity - How can I reduce a bezier curve using a slider till all points converge to its original midpoint


I am using the code from the question here to create bezier curves (I didn't think it was necessary to repost the code here since I haven't made any significant changes to it yet).


I have a symmetric bezier curve (please see image) that I want to reduce until all points converge at the midpoint of the original curve. I would like to preserve its symmetry while it moves. I would like to achieve this using a float slider (with values from 0 to 1) in editor mode.


[image: the symmetric bezier curve to be reduced]



Answer



Assuming you have a cubic Bezier curve type with control points a, b, c, d, and methods to evaluate, given a parameter 0 <= t <= 1, both:





  • a position on the curve at t


    ie. \$\vec p = (1-t)^3 \vec a + 3 t (1-t)^2 \vec b + 3 t^2(1-t) \vec c + t^3 \vec d\$




  • the first derivative of that position with respect to the parameter t


    ie. \$\frac {\delta \vec p} {\delta t} = 3 (1-t)^2 (\vec b - \vec a) + 6 t (1-t) (\vec c - \vec b) + 3 t^2 (\vec d - \vec c)\$




Then we can choose new control points for an arbitrary interval start <= t <= end on this curve like so:


static CubicBezier IntervalFromTo(CubicBezier curve, float start, float end) {
    float scale = (end - start)/3f;

    var a = curve.PositionAt(start);
    var b = a + curve.DerivativeAt(start) * scale;

    var d = curve.PositionAt(end);
    var c = d - curve.DerivativeAt(end) * scale;

    return new CubicBezier(a, b, c, d);
}

If you have a size parameter that runs between 0 (just the midpoint) and 1 (the whole original curve), then you can compute your symmetrical subset as a special case:


subsetCurve = IntervalFromTo(originalCurve, 0.5f - 0.5f * size, 0.5f + 0.5f * size);

I recommend that you keep your originalCurve unchanged throughout this manipulation, and make a fresh subsetCurve from the original each time something changes, rather than overwriting your original with each change. Keeping this separation will ensure you don't get unwanted vibrations or degradation due to accumulating rounding errors.
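
To wire this up to the slider described in the question, a minimal sketch could look like the following (assuming the CubicBezier type is serializable so it shows in the Inspector, and that the IntervalFromTo helper above is accessible from this component, e.g. declared as a static method in this class or on CubicBezier; OnValidate is called by Unity whenever a value changes in the Inspector):


using UnityEngine;

public class BezierShrinker : MonoBehaviour
{
    public CubicBezier originalCurve;  // authored once, never modified by the slider
    public CubicBezier subsetCurve;    // the symmetric subset the rest of the game reads

    [Range(0f, 1f)]
    public float size = 1f;            // 1 = the whole curve, 0 = collapsed to the midpoint

    // Runs in the editor whenever a serialized field is changed in the Inspector.
    void OnValidate()
    {
        subsetCurve = IntervalFromTo(originalCurve, 0.5f - 0.5f * size, 0.5f + 0.5f * size);
    }
}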


3d - getting bone base and tip positions from a transform matrix?


I need this for a Blender3d script, but you don't really need to know Blender to answer this.


I need to get the bone base and tip positions from a transform matrix read from a file. The position of the base is the location part of the matrix, the length of the bone (the distance from base to tip) is the scale, and the position of the tip is calculated from the scale (the distance from the bone base) and the rotation part of the matrix.


So how do I calculate these?


bone.base([x,y,z]) # x,y,z - floats
bone.tip([x,y,z])
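
As a sketch of the computation described above, shown in C# with System.Numerics purely for illustration (in an actual Blender script the equivalent would be mathutils' Matrix.decompose(), which returns location, rotation and scale), and assuming the bone points along its local +Y axis with its length stored in the Y scale:


using System.Numerics;

static class BoneMath
{
    // Hypothetical helper: splits a bone's transform matrix into base and tip positions.
    public static (Vector3 basePos, Vector3 tipPos) BoneFromMatrix(Matrix4x4 m)
    {
        Matrix4x4.Decompose(m, out Vector3 scale, out Quaternion rotation, out Vector3 translation);

        Vector3 basePos = translation;                  // base = location part of the matrix
        float length = scale.Y;                         // assumption: bone length is stored in the Y scale
        Vector3 localTip = new Vector3(0f, length, 0f); // assumption: the bone points along local +Y
        Vector3 tipPos = basePos + Vector3.Transform(localTip, rotation);

        return (basePos, tipPos);
    }
}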


pronunciation - How should I pronounce "live music"?


How should I pronounce "live" when I mean, for example, "live broadcasting" or "live music"? Is it "laiv" or "liv"?




articles - Using 'most': with or without 'the'?


I thought we need "the" before the word most. Such as in:



She is the most beautiful girl.




But, I came across this sentence:



Most incredible is how many Islamic scholars have popped up over the past few years.



Why isn't there "the" before the word "most" ? I know "the" is not needed if the word afterward is an adverb as in:


I like it most.


However, "incredible" isn't being used as an adverb here.



Answer



Before a singular count noun modified by 'most' plus an adjective you need a determiner, and this can include either the definite article or indefinite article. So technically, one could say both




She is the most beautiful girl.



and



She is a most beautiful girl.



For the difference in usage, see a most talented writer -- grammar -- why not "the most talented writer"?.


For an adjective on its own, without any following noun, you use most by itself, since there is no noun to require a determiner:



Most beautiful was the girl I knew in high school.



The girl next door was most beautiful.


Most incredible is how many...



We have to be careful because in some situations, the noun can be elided or implied. It should be clear from context if this is the case:



The girl next door was the most beautiful (girl).


The most astonishing (fact) is that zombies are real.



Curiously, using an implied or elided noun with the indefinite article seems doubtful as to its grammaticality.




?The girl next door was a most beautiful.



The question mark (?) marks this sentence as having questionable grammaticality. That is, not all people will judge it to be grammatical. Thinking about it, I can think of contexts where it would be grammatical, but they would be rare.


terminology - What is a Game Engine?



I am new to game development. All I have developed is some 2D games using Game Maker by YoYo Games. There, game development is much easier, as simple as drag and drop.


But now I wish to grow in game development and want to try my hand at common programming languages like Java, C++, or C. In pursuing this, I came across the first topic: the game engine.


So, what is a game engine?


That is a broad question, with various answers. I came to some conclusions after reading various links:




Wikipedia:


A game engine is a system designed for the creation and development of video games. The core functionality typically provided by a game engine includes a rendering engine for 2D or 3D graphics, a physics engine or collision detection (and collision response), sound, scripting, animation, artificial intelligence, networking, streaming, memory management, threading, localization support, and a scene graph.


GameCareerGuide.com


It exists to abstract the (sometime platform-dependent) details of doing common game-related tasks, like rendering, physics, and input, so that developers (artists, designers, scripters and, yes, even other programmers) can focus on the details that make their games unique.


Engines offer reusable components that can be manipulated to bring a game to life. Loading, displaying, and animating models, collision detection between objects, physics, input, graphical user interfaces, and even portions of a game's artificial intelligence can all be components that make up the engine.



Now, what I understand is that a game engine takes care of all the common work, like physics, loading, etc.


As far as my question is concerned, what is a game engine, programmatically?


Is it a library, with pre-defined functions and classes which can be inherited? Or what exactly is it?




Answer



A library simply refers to a collection of classes/functions; there is really not much more to it. A game engine can be released as a library, and that doesn't change anything. After all, software is built from a collection of classes and/or functions.


A game engine, on the other hand, refers to the basic software of your game. When you speak of a game engine, there is at least an architecture involved that handles the bare minimum of the game structure (entities/game objects, rendering, etc.). A lot of the technical work is automated for you.


Game engines dictate how certain things are done (adding scenes, entities/game objects, loading assets, etc.). All you have to do is add game logic and give it artistic flair (assets: sound, models, shaders, whatever).


Game engines exist so that they can boost production. Why or how they do certain things in certain ways is arbitrary (programming styles as well as work environments can play a big role; to each their own).


Wednesday, July 25, 2018

opengl - Texture antialiasing?


In my Minecraft-clone style game, blocks are textured with a border that is lighter than the block color. See picture below:


Blocks, with GL_NEAREST


To achieve this effect without the textures being blurry I use this code:



glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Without setting the MAG_FILTER to GL_NEAREST, it looks like this:


Without GL_NEAREST


This is ugly and blurry. The upper picture (with GL_NEAREST enabled) is obviously superior. My issue is that the edges in the upper picture are aliased (it is more obvious if you view the photo full-size.)


Is there any way to anti-alias textures? It does not look very good if the borders are aliased like this. Alternatively, is there any way to get rid of the aliasing on the borders?


Thank you for any help!



Answer



Well, the simplest way to do it is to use a higher-resolution texture together with a full mip chain and trilinear filtering; you'll probably want to turn on anisotropic filtering as well. That should give you smooth edges without (as much) over-blurring.
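
As a rough sketch of that setup, shown here with the OpenTK C# bindings for concreteness (the raw GL calls, glGenerateMipmap and glTexParameteri, are the same) and assuming the higher-resolution texture has already been bound and uploaded:


using OpenTK.Graphics.OpenGL;

static class BlockTextureFiltering
{
    // Call once after uploading the texture with GL.TexImage2D.
    public static void EnableMipmappedFiltering()
    {
        GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);

        // Trilinear minification: blend between mip levels as blocks shrink into the distance.
        GL.TexParameter(TextureTarget.Texture2D,
            TextureParameterName.TextureMinFilter, (int)TextureMinFilter.LinearMipmapLinear);

        // Keep magnification crisp so close-up texels stay sharp, as in the question.
        GL.TexParameter(TextureTarget.Texture2D,
            TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);

        // If the EXT_texture_filter_anisotropic extension is available, raising
        // GL_TEXTURE_MAX_ANISOTROPY_EXT here further sharpens surfaces seen at glancing angles.
    }
}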


Another approach would be to use distance field textures, which are specifically for two-tone images (in the paper, for text rendering, where the two tones are the text color and transparent). They are able to produce sharp, anti-aliased edges at any level of magnification. This should work for minification too, although they only touch on it in the paper.



xna 4.0 - How do you add equipment to a 3D character model using XNA/Blender?



I've watched quite a few Blender tutorials, but I have yet to see examples of how to swap out sub-models. So my question is, how do you swap out equipment on a character model? Specifically I’d like to be able to dynamically add/swap clothing and items held by a character model in XNA. Ideally the items would follow the bone structure of the character model.


For example; starting with a naked character in XNA, I’d like to be able to have the character hold an axe and wear chainmail that follows the character’s animations. I’d also like to be able to switch this equipment for a sword and plate mail at any time during the game. I’d rather not create a model for each equipment combination. Is there some way to just add the bones (say a sword bone) and meshes of one model to another model bone (say a right hand bone)?


Thanks…



Answer



For items that are carried by the character like a sword, shield or similar, you can create a special hand-bone, where you attach the new item/geometry at runtime.
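
As a rough sketch of that approach in XNA (names such as "HandR", characterModel, axeModel, characterWorld, view and projection are placeholders for whatever your content uses; with an animated character the hand transform would come from your animation player rather than the static bind pose used here), drawing the held item inside your Draw call might look like:


// World-space transform of the character's hand bone (bind pose shown for brevity).
Matrix[] characterBones = new Matrix[characterModel.Bones.Count];
characterModel.CopyAbsoluteBoneTransformsTo(characterBones);
Matrix handWorld = characterBones[characterModel.Bones["HandR"].Index] * characterWorld;

// Draw the held item (axe, sword, ...) relative to that hand transform.
Matrix[] axeBones = new Matrix[axeModel.Bones.Count];
axeModel.CopyAbsoluteBoneTransformsTo(axeBones);

foreach (ModelMesh mesh in axeModel.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.EnableDefaultLighting();
        effect.World = axeBones[mesh.ParentBone.Index] * handWorld;
        effect.View = view;
        effect.Projection = projection;
    }
    mesh.Draw();
}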


When it comes to different armors/clothing, this is going to be slightly more complicated. An approach that is widely used (I think also by WoW) is using segmented models:


You model your character with all different armor and cloth-pieces as sub-meshes (in blender, you would use vertex-groups for that). Then you just toggle the visibility (or attach) the needed parts and hide (or detach) the unneeded parts. All the parts will share the same skeleton.


You can also achieve a lot of different looks by just swapping the texture.


grammaticality - Is the word "have" in this sentence grammatically wrong? & When is it fine to be "redundant"?



Sometimes, when we ran out of time and did not have space for any further activity, he used to tell us that should anyone need any assistance or have any query, accompany me to my office.




Microsoft Word is telling me that I should use "has" instead of "have", and this casts some doubts that I need to get rid of. So, what is right? And what is wrong?


I think that novelists and storytellers use "redundancy" in a good way to adorn their works of literature and to interestingly add more elaboration, which aids the reader with depicting the plot in his mind, or to show their range; however, when I attempt to apply that positive aspect of redundancy ("when we ran out of time and did not have space for any further activity,"), everyone tells me that I am being tautological. Could anyone explain how and when I can sound positively redundant? (I also believe that, somewhere in English, there is absolutely a more fitting word for that "positive redundancy" thing)




tense - Past perfect / past indefinite


My books says the following sentence is error free.



This is the boy who I think had won the gold medal in the dance competition.




But is it really error free? We use the past perfect when there are two actions, but in the given sentence there is only one action (he won the gold medal), so shouldn't the correct sentence be This is the boy who I think won the gold medal in the dance competition?



Answer



The past perfect doesn't necessarily have to refer to two actions; rather, it marks an action (in the past) which is completed prior to some other past point in time, which can be specified or merely implied. See:



past per·fect /ˈˌpast ˈpərfəkt/ adjective 1. (of a tense) denoting an action completed prior to some past point of time specified or implied, formed in English by had and the past participle, as in he had gone by then. - Google



Look at the example sentence above, "he had gone by then." - there is no specific other action in that sentence, but clearly one is implied.


That is why the sentence you give may be correct: in the context of the surrounding sentences, there may be another past event which is implied, and the reason for using the past perfect form is to show that these events are being compared.


Tuesday, July 24, 2018

definite article - "the legs of the table" vs "the table's legs"


I got this quiz:



The carpenter repaired __ .


(A) the table's legs



(B) table's legs


(C) legs of the table


(D) the legs of the table



...which says the right answer is D. But I think A is also acceptable.


What do you think? Thanks.


Related but not a duplicate: Which is correct, “The carpenter repaired the legs of the tables” or “The carpenter repaired legs of the tables”


If I add an option E: the table legs, is E also acceptable?



Answer



At first glance, I think most native speakers would agree with you, and say that both A and D are pretty much interchangeable. However, books like yours generally have a reason for making a distinction like this.



In this case, I think I've found it. From the Capital Community College's web page on possessives, we find:



Many writers consider it bad form to use apostrophe -s possessives with pieces of furniture and buildings or inanimate objects in general. Instead of "the desk's edge" (according to many authorities), we should write "the edge of the desk" and instead of "the hotel's windows" we should write "the windows of the hotel." In fact, we would probably avoid the possessive altogether and use the noun as an attributive: "the hotel windows." This rule (if, in fact, it is one) is no longer universally endorsed.



My guess is that your textbook is either somewhat dated, and was originally printed when this "rule" was more widely applied, or else the authors thought it would be worth making this distinction even if the rule is no longer universal.


That's likely why D is considered a better answer than A. I'm curious, though: Do the directions for this set of problems say to choose the "correct answer", or say to choose the "best answer"? Sometimes two answers can be correct, but one can still be justifiably preferred over the other.


Of course, in cases like this, textbooks would be much more helpful if the reasoning was listed in the answer key, instead of just telling readers that the answer is D without saying why.


Getting back to your question, you said:



I think A is also acceptable.




and I lean toward agreeing with you. But I think your book is also correct in that D is probably the "best" option of the four that are available, even if many native speakers sometimes ignore the rule about possessives and furniture. And you are definitely right about your Option E; in fact, the website even suggests this might be the best way to write it: The carpenter repaired the table legs. But that wasn't an option in the question.


java - Using random numbers with a bias



I appear to be awful at describing the question so I'll try and describe the problem.


I want to add a random number of heads to my creatures, but I want to be able to determine several things: a) the minimum number of possible heads, b) the maximum number of possible heads, and c) the probability of the number being high or low within the above values.


So I could add heads like so: addHeads(5, 10, 0.5); // should produce creatures with "around" 7.5 heads, but they could have anywhere from 5 to 10.


So random number generation isn't the problem, but controlling and actually using them in a game is. :D



Answer



One way to do it is to apply a power function. Start with a random number in [0, 1] and then raise it to the power of some positive number. Powers < 1 will bias upward, i.e. the numbers will be more likely to be higher than lower within [0, 1], and powers > 1 will bias downward. Then use multiplication and addition to shift the range of numbers from [0, 1] to your desired range. In pseudocode:


function random(low, high, bias)
{
    float r = rand01(); // random between 0 and 1
    r = pow(r, bias);
    return low + (high - low) * r;
}

// Examples:
random(5, 10, 1.0); // between 5 and 10, average is 7.5
random(5, 10, 0.5); // between 5 and 10, average is somewhere around 8.5
random(5, 10, 2.0); // between 5 and 10, average is somewhere around 6.5

Here's a plot of the second example:


enter image description here



On the horizontal is the initial random number in [0, 1] and on the vertical is the output. You can see that something like 75% of the initial range is mapped to values higher than 7.5, and 25% of the initial range is mapped below 7.5. So the result is that numbers generated by this function are more likely to be higher.
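
For reference, here is a compilable version of that pseudocode, shown in C# (the question is tagged Java, and the Java version is essentially identical using Math.random() and Math.pow()); the rounding in the usage comment is just one way to turn the result into a whole number of heads:


using System;

static class BiasedRandom
{
    static readonly Random rng = new Random();

    // bias < 1 skews results toward 'high', bias > 1 skews them toward 'low'.
    public static double Range(double low, double high, double bias)
    {
        double r = rng.NextDouble();   // uniform in [0, 1)
        r = Math.Pow(r, bias);
        return low + (high - low) * r;
    }
}

// Example: a creature with between 5 and 10 heads, biased toward the high end:
// int heads = (int)Math.Round(BiasedRandom.Range(5, 10, 0.5));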


unity - Creating a Robust Item System


My aim is to create a modular / as generic as possible item system which could handle things like:



  • Upgradeable Items (+6 Katana)

  • Stat Modifiers(+15 dexterity)

  • Item Modifiers(%X chance to do Y damage, chance to freeze)

  • Rechargeable Items(Magic staff with 30 usages)

  • Set Items(Equip 4 piece of X set to activate Y feature)

  • Rarity(common, unique, legendary)

  • Disenchantable(breaks into some crafting materials)


  • Craftable(can be crafted with certain materials)

  • Consumable(5min %X attack power, heal +15 hp)


*I was able to solve the features that are in bold with the following setup.


Now, I tried to add many options to reflect what I have in mind. I don't necessarily plan to add all of these features, but I would like to be able to implement them as I see fit. These should also be compatible with an inventory system and serialization of data.


I am planning to not use inheritance at all but rather an entity-component / data driven approach. Initially I thought of a system that has:



  • BaseStat: a generic class that holds stats on the go (can be used for items and character stats too)

  • Item: a class that holds data such as a list of stats, name, item type, and things that are related to UI (actionName, description, etc.)

  • IWeapon: an interface for weapons. Every weapon will have its own class with IWeapon implemented. It will have Attack and a reference to the character stats. When a weapon is equipped, its data (the Item class' stats) will be injected into the character stats (whatever BaseStat it has will be added to the character class as a stat bonus). So, for example, we want to produce a sword (I'm thinking of producing item classes with JSON), and the sword will add 5 attack to the character stats. So we have a BaseStat of ("Attack", 5) (we could use an enum too). This stat will be added to the character's "Attack" stat as a BonusStat (which would be a different class) upon equipping it. So a class named Sword implementing IWeapon will be created when its Item class is created. We can then inject the character stats into this sword, and when attacking, it can retrieve the total Attack stat from the character stats and inflict damage in the Attack method.

  • BonusStat: a way of adding stats as bonuses without touching the BaseStat.

  • IConsumable: Same logic as with IWeapon. Adding a direct stat is fairly easy (+15 hp), but I'm not sure about adding temporary effects with this setup (%x to attack for 5 min).

  • IUpgradeable: This can be implemented with this setup. I am thinking of UpgradeLevel as a base stat, which is increased upon upgrading the weapon. When upgraded, we can re-calculate the weapon's BaseStats to match its upgrade level.


Up to this point, I can see that the system is fairly good. But for the other features, I think we need something else, because, for example, I can't implement the Craftable feature with this setup, as my BaseStat would not be able to handle it, and this is where I got stuck. I could add all the ingredients as stats, but that would not make sense.


To make it easier for you to contribute, here are some questions that you may be able to help with:



  • Should I continue with this setup to implement the other features? Would it be possible without inheritance?

  • Is there any way that you can think of to implement all of these features without inheritance?

  • How could one achieve the Item Modifiers, given that they are very generic in their nature?

  • What can be done to ease the process of building this kind of architecture? Any recommendations?

  • Are there any sources I can dig into that are related to this problem?

  • I am really trying to avoid inheritance, but do you think these could be solved/achieved more easily with inheritance while keeping things fairly maintainable?


Feel free to answer just a single question, as I kept the questions very broad so I can get knowledge from different angles and people.







Following @jjimenezg93's answer, I created a very basic system in C# for testing, and it works! See if you can add anything to it:


public interface IItem
{
    List<IAttribute> Components { get; set; }

    void ReceiveMessage<T>(T message);
}



public interface IAttribute
{
    IItem source { get; set; }

    void ReceiveMessage<T>(T message);
}



So far, IItem and IAttribute are the base interfaces. There was no need (that I can think of) to have a base interface for messages, so we will directly create a test message class. Now for the test classes:




public class TestItem : IItem
{
    private List<IAttribute> _components = new List<IAttribute>();

    public List<IAttribute> Components
    {
        get { return _components; }
        set { _components = value; }
    }

    // Forwards every incoming message to all attached attributes; only the
    // attributes that recognise the message type will act on it.
    public void ReceiveMessage<T>(T message)
    {
        foreach (IAttribute attribute in Components)
        {
            attribute.ReceiveMessage(message);
        }
    }
}




public class TestAttribute : IAttribute
{
    string _infoRequiredFromMessage;

    public TestAttribute(IItem source)
    {
        _source = source;
    }

    private IItem _source;
    public IItem source
    {
        get { return _source; }
        set { _source = value; }
    }

    // Only acts on TestMessage; any other message type is ignored.
    public void ReceiveMessage<T>(T message)
    {
        TestMessage convertedMessage = message as TestMessage;
        if (convertedMessage != null)
        {
            convertedMessage.Execute();
            _infoRequiredFromMessage = convertedMessage._particularInformationThatNeedsToBePassed;
            Debug.Log("Message passed : " + _infoRequiredFromMessage);
        }
    }
}



public class TestMessage
{
    private string _messageString;
    private int _messageInt;
    public string _particularInformationThatNeedsToBePassed;

    public TestMessage(string messageString, int messageInt, string particularInformationThatNeedsToBePassed)
    {
        _messageString = messageString;
        _messageInt = messageInt;
        _particularInformationThatNeedsToBePassed = particularInformationThatNeedsToBePassed;
    }

    // Messages should not have methods, so this is here for fun and testing.
    public void Execute()
    {
        Debug.Log("Desired Execution Method: \nThis is test message : " + _messageString + "\nThis is test int : " + _messageInt);
    }
}

These are the setup needed. Now we can use the system(Following is for Unity).


public class TestManager : MonoBehaviour
{
    // Use this for initialization
    void Start()
    {
        TestItem testItem = new TestItem();
        TestAttribute testAttribute = new TestAttribute(testItem);
        testItem.Components.Add(testAttribute);

        TestMessage testMessage = new TestMessage("my test message", 1, "VERYIMPORTANTINFO");
        testItem.ReceiveMessage(testMessage);
    }
}

Attach this TestManager script to a GameObject in the scene and you will see in the debug log that the message is successfully passed.




To explain things: every item in the game will implement the IItem interface, and every attribute (the name should not confuse you; it means an item feature/system, like Upgradeable or Disenchantable) will implement IAttribute. Then we have a method to process the message (why we need messages is explained in the example further down). So, in context, you can attach attributes to an item and let the system do the rest for you, which is very flexible because you can add/remove attributes with ease. A pseudo-example would be Disenchantable: we will have a class called Disenchantable (an IAttribute), and in its Disenchant method it will ask for:



  • List ingredients (when the item is disenchanted, which items should be given to the player); note: IItem should be extended to have ItemType, ItemID, etc.

  • int resultModifier (if you implement some kind of boost to the disenchant feature, you can pass an int here to increase the ingredients received when disenchanting)

  • int failureChance (if the disenchant process has a failure chance)



etc.


This information will be provided by a class called DisenchantManager; it will receive the item and form the message according to the item (the ingredients the item breaks into when disenchanted) and player progression (resultModifier and failureChance). In order to pass this message, we will create a DisenchantMessage class, which will act as the body for this message. So the DisenchantManager will populate a DisenchantMessage and send it to the item. The item will receive the message and pass it to all of its attached attributes. Since the Disenchantable class's ReceiveMessage method only looks for a DisenchantMessage, only the Disenchantable attribute will receive this message and act on it. Hope this clears things up as much as it did for me :).
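
To make that Disenchantable example concrete, here is a hedged sketch built on the IItem/IAttribute interfaces above; the member names and the DisenchantMessage shape simply follow the description in the previous paragraphs, and the actual failure roll and loot hand-off are left as a stub:


using System.Collections.Generic;

public class DisenchantMessage
{
    public List<IItem> Ingredients;  // what the player should receive when the item breaks down
    public int ResultModifier;       // bonus ingredients from player progression / boosts
    public int FailureChance;        // percent chance the disenchant fails
}

public class Disenchantable : IAttribute
{
    public IItem source { get; set; }

    public Disenchantable(IItem source)
    {
        this.source = source;
    }

    public void ReceiveMessage<T>(T message)
    {
        DisenchantMessage msg = message as DisenchantMessage;
        if (msg == null)
            return; // not a disenchant request; another attribute may handle it instead

        // Here you would roll against msg.FailureChance and, on success, hand
        // msg.Ingredients (scaled by msg.ResultModifier) to the inventory/loot system.
    }
}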



Answer



I think you can achieve what you want in terms of scalability and maintainability by using an entity-component system with basic inheritance and a messaging system. Of course, keep in mind that while this system is the most modular/customizable/scalable I can think of, it will probably perform worse than your current solution.


I'll explain further:


First of all, you create an interface IItem and an interface IComponent. Any item you want to store must inherit from IItem, and any component you want to affect your items must inherit from IComponent.


IItem will have an array of components and a method for handling IMessage. This handling method simply sends any received message to all stored components. Then, the components which are interested in that given message will act accordingly, and the others will ignore it.


One message, for example, could be of type damage, informing both the attacker and the attacked, so you know how much you hit for and can maybe charge your fury bar based on that damage. Or the enemy's AI can decide to run if it hits you and deals less than 2 HP of damage. These are dumb examples, but using a system similar to the one I'm describing, you won't need to do anything more than create a message and the appropriate handlers to add most mechanics of this kind.


I have an implementation of an ECS with messaging here, but it is used for entities instead of items and it uses C++. Anyway, I think it can help if you take a look at component.h, entity.h and messages.h. There are a lot of things that could be improved, but it worked for me in that simple university project.



Hope it helps.


Simple past, Present perfect Past perfect

Can you tell me which form of the following sentences is the correct one please? Imagine two friends discussing the gym... I was in a good s...