Thursday, August 31, 2017

networking - How can I make a peer-to-peer multiplayer game?



How can I make a p2p multiplayer game? I would like to have a server-less multiplayer game. But then, how do all the clients know about each other?


Why is p2p so popular for file transfer, but not for multiplayer games?



Answer



Peer-to-peer games generally still have a game host. It's the game host that posts the game to the master games list and accepts new connections. Whenever the game host accepts a new client into the game, it notifies all existing clients about the new client so that they can each connect to it.


The simplest way to implement p2p is with a lobby. All clients connect to the host in a lobby (or chat room). When everyone is ready, the host presses start and all clients enter the game at the same time (an approach commonly used in strategy games). A more complex approach is "drop-in drop-out", where players can join and leave mid-game; however, this is a lot more complex to implement in a p2p game and requires a feature called host migration.
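The host's bookkeeping can be sketched in a few lines. This is a hedged illustration only; `Lobby`, `accept`, and the peer names are hypothetical, not part of any real networking API.

```python
class Lobby:
    """Sketch of the host-side bookkeeping described above: the host keeps
    the list of peers and tells everyone about newcomers. Illustrative
    names only; not a real networking API."""

    def __init__(self):
        self.peers = []          # addresses of connected clients

    def accept(self, new_peer):
        # Tell every existing client about the newcomer, so each of them
        # can open a direct peer-to-peer connection to it...
        notifications = [(old, new_peer) for old in self.peers]
        # ...and tell the newcomer about everyone already present.
        notifications += [(new_peer, old) for old in self.peers]
        self.peers.append(new_peer)
        return notifications     # (recipient, peer-to-connect-to) pairs

lobby = Lobby()
lobby.accept("alice")        # first client: no one to notify yet
msgs = lobby.accept("bob")   # alice and bob must each learn about the other
```

In a real game the "notifications" would be messages sent over sockets; the point is only that the host is the single place where membership changes are observed and fanned out.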



A good number of games use peer-to-peer networking, including most strategy, sports and driving titles. Just about all Xbox 360 and PS3 games use p2p networking. The client-server architecture is mostly used in first-person shooters and MMO games.


Client-server is generally easier to implement, as only one machine needs to know the entire game state; the clients are basically just renderers, with some prediction to make things look smooth.


When you build a p2p engine, every client needs the full state of the game world, and all of them are required to stay in sync.


For more details on p2p and client-server architectures I suggest you read the following article: What Every Programmer Needs To Know About Game Networking.


And if you're new to networking in general, check out the other great articles on that site. Glenn is a networking genius.


Can we use an adjective after a noun?


People angry with the high prices were protesting.


Can we use an adjective after a noun without the verb to be, as in the example above?


Can you tell me what grammar rule this is?



Answer




As J.R. says, that construction is grammatical, and indeed required.


A bare adjective, or one modified by one or more preceding adverbs, goes in front of the noun. (I'm adding a determiner, many, to your sentence, to make the structure a little less ambiguous.)



Many angry people were protesting.
Many passionately and vociferously angry people were protesting.



But an adjective which has a complement cannot be placed in front of the noun; it must be treated as a reduced relative clause and placed after the noun.



*Many angry with the high prices people were protesting.
   Many people angry with the high prices were protesting.




If the adjective is modified it will carry its modifiers with it.



Many people passionately and vociferously angry with the high prices were protesting.



The same thing is true of any ‘heavy’ modifier with an embedded complement, such as a participle phrase or an adjectival preposition phrase:



*Many suffering from hunger people were protesting.
   Many people suffering from hunger were protesting.


*Many from the surrounding villages people were protesting.

   Many people from the surrounding villages were protesting.





As Laure says, the clause may be bracketed with commas. This changes the meaning, however: the clause is now ‘non-restrictive’ rather than ‘restrictive’: being angry with high prices no longer defines the people who were protesting; it is an additional observation about them. A non-restrictive clause can be placed in other positions:



Passionately and vociferously angry with the high prices, many people were protesting.
Many people were protesting, passionately and vociferously angry with the high prices.





An asterisk (*) before an utterance marks it as ungrammatical.



A reduced relative clause is one from which the relative pronoun and any immediately following copula have been deleted as unnecessary: who were angry with the high prices.


unity - Architecture of "doodle jump" type gameplay infinite looping background


I am planning to make a Doodle Jump type game: a character jumping on platforms, with a scrolling background that appears to move as the character moves upward (just like in Doodle Jump). My thoughts for this kind of background are:




  • Take a large image, e.g. 2048x2048, make two planes, and set this texture on both. When the first image ends, the other will move its position on top of the first; when the second ends, the first will move on top of the second, and so on.




  • The other approach could be to Instantiate and Destroy small textured images according to the jumping character's position (which I suspect would be expensive).





Those are my thoughts. I want to go with the best approach, and would like to hear which one that is. Is there a more appropriate way to achieve a movable background? I'll appreciate your suggestions. Thanks.



Answer



We are in the making of a similar game, without the left-right scrolling. We decided to load as much texture as we can up front, because loading can be really slow when our character moves up fast, regardless of how we try to load it (e.g. AssetBundle's LoadAsync). Plus, when our level loads we pre-create (Instantiate) some platforms and particles, but instead of destroying them when they pass out of view, we store and reuse them (classic object pools). So my suggestion is: preload/pre-create, then reuse as many things as you can to achieve fast gameplay.
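The pooling idea can be sketched as follows. This is an illustrative Python sketch, not SFML or Unity API; `SpritePool` and its methods are made-up names.

```python
import collections

class SpritePool:
    """Tiny object-pool sketch of the reuse strategy described above:
    instead of destroying platforms that scroll off-screen, park them in
    a free list and hand them back out on the next spawn."""

    def __init__(self, factory, prewarm=0):
        self.factory = factory
        # Pre-create objects at load time so gameplay never allocates.
        self.free = collections.deque(factory() for _ in range(prewarm))

    def acquire(self):
        # Reuse a parked object if one exists, otherwise create one.
        return self.free.popleft() if self.free else self.factory()

    def release(self, obj):
        # "Destroy" becomes "park for reuse".
        self.free.append(obj)

pool = SpritePool(factory=dict, prewarm=8)   # pre-create 8 "platforms"
p = pool.acquire()                           # served from the prewarm, no allocation
pool.release(p)                              # parked instead of destroyed
```

The same pattern applies to particles, projectiles, or anything spawned and despawned frequently; the win is avoiding allocation and destruction during gameplay.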


How does Unity use C# as a scripting language?


To my knowledge, thus far, I have thought that C# is and has always been a compiled language. I have recently started studying Unity3d and noticed that they give C# as an option for scripting and interacting with game objects through their API (along with JavaScript and a couple of other alternatives).


How is this done? Is C# actually being executed or is this an abstraction that is being converted to a different scripting language under the covers? It seems to me that there is some sort of interpretation going on for this functionality.





idioms - Meaning of "Orange is the new black"




There is a TV show by this name, and I have heard someone say it too. I googled its meaning, but the effort was in vain.




Wednesday, August 30, 2017

pbr - What is physically correct lighting all about?


I can't find anything comprehensive using Google. I'm wondering what the core concepts of physically correct lighting are, and where I could read up on it. What's physically correct lighting all about? Is Phong illumination generally physically incorrect?




Answer



This is a much bigger topic than can be covered in an answer, but briefly:


Physically-based shading means leaving behind phenomenological models, like the Phong shading model, which are simply built to "look good" subjectively without being based on physics in any real way, and moving to lighting and shading models that are derived from the laws of physics and/or from actual measurements of the real world, and rigorously obey physical constraints such as energy conservation.


For example, in many older rendering systems, shading models included separate controls for specular highlights from point lights and reflection of the environment via a cubemap. You could create a shader with the specular and the reflection set to wildly different values, even though those are both instances of the same physical process. In addition, you could set the specular to any arbitrary brightness, even if it would cause the surface to reflect more energy than it actually received.


In a physically-based system, both the point light specular and the environment reflection would be controlled by the same parameter, and the system would be set up to automatically adjust the brightness of both the specular and diffuse components to maintain overall energy conservation. Moreover you would want to set the specular brightness to a realistic value for the material you're trying to simulate, based on measurements.
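The coupling between the diffuse and specular terms can be illustrated with a toy calculation. This is a sketch of the principle only; real engines use more involved, BRDF-specific normalization, and the function name here is my own.

```python
def energy_conserving_weights(specular):
    """Toy illustration of energy conservation as described above: a single
    reflectance parameter drives the specular term, and the diffuse term is
    scaled down so the surface never reflects more energy than it receives."""
    specular = max(0.0, min(1.0, specular))   # clamp to a physical range
    diffuse = 1.0 - specular                  # diffuse gets what specular leaves
    return diffuse, specular

d, s = energy_conserving_weights(0.3)
# d + s can never exceed 1, no matter what the artist types in
```

Contrast this with the older ad-hoc systems described above, where specular brightness was a free parameter and nothing stopped the total from exceeding the incoming energy.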


Physically-based lighting or shading includes physically-based BRDFs, which are usually based on microfacet theory, and physically correct light transport, which is based on the rendering equation (although heavily approximated in the case of real-time games).


It also includes the necessary changes in the art process to make use of these features. Switching to a physically-based system can cause some upsets for artists. First of all it requires full HDR lighting with a realistic level of brightness for light sources, the sky, etc. and this can take some getting used to for the lighting artists. It also requires texture/material artists to do some things differently (particularly for specular), and they can be frustrated by the apparent loss of control (e.g. locking together the specular highlight and environment reflection as mentioned above; artists will complain about this). They will need some time and guidance to adapt to the physically-based system.


On the plus side, once artists have adapted and gained trust in the physically-based system, they usually end up liking it better, because there are fewer parameters overall (less work for them to tweak). Also, materials created in one lighting environment generally look fine in other lighting environments too. This is unlike more ad-hoc models, where a set of material parameters might look good during daytime, but it comes out ridiculously glowy at night, or something like that.


Here are some resources to look at for physically-based lighting in games:




And of course, I would be remiss if I didn't mention Physically-Based Rendering by Pharr and Humphreys, an amazing reference on this whole subject and well worth your time, although it focuses on offline rather than real-time rendering.


word request - Term for someone who cannot keep something to themselves


We have them everywhere. And I'm looking for the term to refer to them.


These are the people who cannot keep any matter to themselves. Irrespective of the degree of seriousness of the matter, they'll simply spit it out in front of others. In short, they are not eligible to say the idiom I'll carry this to my grave!


In my school/college days, we had a very informal term for them. We used to call such people 'the BBC'! That's because if you have told anything to them, they'll spread the word for sure.



There could be more than one word. I'll be happy to have the closest term for it though.



Answer




  • a big mouth - if you have a big mouth, you talk too much, especially about things that should be secret

  • loose lips - the practice or characteristic of being overly talkative, especially with respect to inadvertently revealing information which is private or confidential


graphics - How to program a cutting tool for 3D model in game


I'm looking for a resource to figure out how to program a function to cut a 3d model in game.


Example: Enemy/NPC is sliced into 2 pieces with a sword. His body is not hollow, you can see bloody texture where normally a 'polygon hole' would be.


The first step is to actually 'cut/slice' the model, then add in polygons to fill the hole in the model. I know this can be done in 3D modelling software, but I'm not sure how to go about doing this in a game, code-wise. I do not wish to use 'pre-cut-up' models; the code will determine where the cut is.
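As a rough starting point, the core test at the heart of such a slicer, deciding which side of the cut plane each vertex falls on, can be sketched as a signed-distance check. This is illustrative only, not a full mesh splitter; the function name is my own.

```python
def side_of_plane(point, plane_point, plane_normal):
    """Signed distance from a point to a plane: > 0 on the normal's side,
    < 0 on the other side, 0 exactly on the plane. A mesh slicer runs this
    test on every vertex; triangles whose vertices straddle the plane get
    clipped, and the resulting boundary loop is then triangulated to fill
    the 'bloody' cap."""
    return sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))

# Horizontal cut through the origin, with the normal pointing up (+Y):
assert side_of_plane((0, 2, 0), (0, 0, 0), (0, 1, 0)) > 0    # above the cut
assert side_of_plane((0, -1, 0), (0, 0, 0), (0, 1, 0)) < 0   # below the cut
```

Everything past this test (clipping the straddling triangles, capping the hole) is the hard part, but it all starts with this classification.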


Any pointers in the right direction would be greatly appreciated.




mathematics - How can I approximate an "opening fan"-transformation?


Is there a way to—with matrices—transform something as if opening the image on a Japanese folding fan?


poem on an open fan
Image from Wikimedia Commons.


I'm at a loss of what to call it, so pointers towards avenues of research would be greatly appreciated. If it isn't possible with matrices, alternate methods would also be good.


If that is not possible either, a way to approximate the perspective tool in most graphics manipulation programs would be nice also.




2d - How to calculate corner positions/marks of a rotated/tilted rectangle?


I've got two elements, a 2D point and a rectangular area. The point represents the middle of that area. I also know the width and height of that area. And the area is tilted by 40° relative to the grid.


Now I'd like to calculate the absolute positions of each corner mark of that tilted area only using this data. Is that possible?



Answer



X = x*cos(θ) - y*sin(θ)
Y = x*sin(θ) + y*cos(θ)


This will give you the location of a point rotated by an angle θ (counterclockwise) around the origin. Since the corners of the square are rotated around the center of the square and not the origin, a couple of steps need to be added before you can use this formula. First you need to express the point relative to the origin. Then you can apply the rotation formula. After the rotation, you move it back relative to the center of the square.


// cx, cy - center of square coordinates
// x, y - coordinates of a corner point of the square
// theta is the angle of rotation, in radians (C's cos/sin expect radians)

// translate point to origin
float tempX = x - cx;
float tempY = y - cy;


// now apply rotation
float rotatedX = tempX*cos(theta) - tempY*sin(theta);
float rotatedY = tempX*sin(theta) + tempY*cos(theta);

// translate back
x = rotatedX + cx;
y = rotatedY + cy;

Apply this to all 4 corners and you are done!
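Putting the three steps together for a whole rectangle might look like the sketch below. The function name and the degree-based parameter are my own choices; note that `math.cos` and `math.sin` expect radians, hence the conversion.

```python
import math

def rotated_corners(cx, cy, width, height, theta_deg):
    """Apply the translate-rotate-translate steps above to all four
    corners of a rectangle centred on (cx, cy)."""
    theta = math.radians(theta_deg)          # cos/sin expect radians
    hw, hh = width / 2.0, height / 2.0
    # Corner offsets relative to the centre (already "translated to origin"):
    corners = [(-hw, -hh), (hw, -hh), (hw, hh), (-hw, hh)]
    out = []
    for x, y in corners:
        rx = x * math.cos(theta) - y * math.sin(theta)   # rotate
        ry = x * math.sin(theta) + y * math.cos(theta)
        out.append((rx + cx, ry + cy))                   # translate back
    return out

# A 2x2 square at the origin rotated 90 degrees maps its corners onto
# each other, which is an easy way to sanity-check the formula.
corners = rotated_corners(0, 0, 2, 2, 90)
```

For the question's 40° tilt, call `rotated_corners(cx, cy, w, h, 40)` with the known center, width and height.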


phrase usage - What does "For next to nothing" mean?


I will start with the example I know to make it clear.


In a TV show this conversation happened:



Guy1: This car is crap. I'll buy it for next to nothing?


Guy2: How next to?



I guess the fans of this show will figure out which one it is :). Anyway, what does "for next to nothing" mean? And how is it possible to ask "How next to?"?


I think answering the first will lead to the second.



Answer




"Next to" means "almost" in this case.


Imagine a scale of possible prices, from zero to infinity. What sits immediately next to nothing (zero) on that scale? "Almost nothing."


"How next to?" is a jocose question whose purpose is to determine the degree of "almostness": how close to zero, exactly, is the price? Does "almost nothing" mean a dime, a quarter, or ten dollars?


Closely related is the idiomatic phrase "next door to":



STELLA:
A rhinestone tiara she wore to a costume ball.


STANLEY: What's rhinestone?


STELLA:
Next door to glass.




In this excerpt from A Streetcar Named Desire, a play by Tennessee Williams, Stella explains to Stanley (who thought that he was looking at something valuable) that the tiara is really, really cheap. "Next door to glass" means "Those are not real diamonds. They're fake. They're made of rhinestone. How expensive is rhinestone? Barely more expensive than glass."


shaders - Is multitexturing really just "using more than one texture"?


This might seem stupid, but it bugs me. From what I understand, multitexturing is just using more than 1 texture per shader (usually to blend them somehow). So instead of creating 1 texture, I create 2 or more which is... pretty obvious!


Why is there a special term for that? Is there more to it than just "using more than 1 texture"? Am I missing something?



Answer




The special term has to do with the evolution of graphics cards and real-time graphics APIs, rather than with anything all that special. For instance, image processing and offline graphics rendering had this feature long before it was available in real-time graphics APIs and consumer-level hardware.


Multitexturing was introduced in OpenGL 1.1 (fixed functionality) as new state in the API. Instead of binding only one texture, it gave you the ability to bind multiple textures, one for each texture unit, and then blend between them using one of the texture environment parameters: blend, modulate, etc.1


Then texture combiners were introduced. They gave you the ability to combine a set of textures using more flexible operations, by giving you a way to pass parameters to a fixed set of operations (think of them as shaders, but with a fixed set of parameters). This was more flexible than the original texture environments.


With the introduction of shaders, the previous combiners became redundant, and they were eventually officially deprecated. Finally, as hardware and APIs evolved, it became trivial to pass a texture sampler to the shader and let the GPU execute arbitrary operations (your shader) on your textures, far more flexibly than what came before.


The point here: even though it became trivial, the term survived as a generic term for any operation that samples and combines multiple textures.
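As a toy illustration of what those fixed-function texture environments did, here is the classic "modulate" blend of two sampled texels. This is a sketch of the concept in plain Python, not OpenGL API code.

```python
def modulate(base, detail):
    """Per-channel multiply of two sampled texels (channels in [0, 1]),
    the way the fixed-function modulate environment combined the outputs
    of two texture units, e.g. a base texture times a lightmap."""
    return tuple(a * b for a, b in zip(base, detail))

brick = (0.8, 0.4, 0.2)      # texel sampled from texture unit 0
lightmap = (0.5, 0.5, 0.5)   # texel sampled from texture unit 1
lit = modulate(brick, lightmap)   # darkened brick
```

In a modern shader this is just `tex2D(a, uv) * tex2D(b, uv)`, which is exactly why the dedicated machinery became redundant.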


1 I am recalling OpenGL fixed functionality from memory; correct me if I missed something.


Tuesday, August 29, 2017

xna - Best pathfinding algorithm for a tower-defense game?



What do you suggest would be the best pathfinding algorithm for a tower-defense game? It's a 2D tile-based game, where walls and towers block the way between spawn points and their destination points.


Constantly, as the player places a new tower to block the way, or to help shoot spawning units before they reach their destination, a new path for the affected spawn point has to be recalculated, and the units must be re-routed to that new path.


Therefore, I need performance.



I tried the A* algorithm, but every time the player places a new tower and the path has to be recalculated, the existing units that haven't gone past the tower yet get lost and stand still, since they were part of the old path, which has now lost its pathing information.



Answer



A* should be plenty fast enough. Each time a tower is placed you should calculate a new path for each spawn point, and assign that path to each unit that is spawned there. You should also calculate a new path for the units "in the field". Units in the field can have their paths calculated as the shortest path to get back on track, as in a path to the new path. Or the units can have their path calculated from their current position to the destination.


You can likely save calculations by grouping units in the field and calculate a common path for them all. For example if you have a group of units in tile (4,7), they can all use the same path, so you just have to calculate it once.


Additionally (depending on what your rules are), you should consider doing these calculations as a check before the tower is placed. This will disallow the player from placing towers that block all paths. Or, as some tower-defense games work, if the player blocks all paths, the units just ignore towers when pathfinding.
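The pre-placement check from the last paragraph can be sketched as a simple flood fill over the tile grid. This is illustrative; `path_exists` and the grid encoding (`1` = blocked) are my own choices, and a real game would reuse its A* instead.

```python
from collections import deque

def path_exists(grid, start, goal):
    """Breadth-first flood fill: before a tower is committed, verify the
    spawn point can still reach the goal. grid[y][x] == 1 marks a blocked
    tile (wall or tower); start and goal are (x, y) tuples."""
    h, w = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == goal:
            return True
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx] \
                    and (nx, ny) not in seen:
                seen.add((nx, ny))
                queue.append((nx, ny))
    return False

grid = [[0, 0, 0],
        [1, 1, 0],   # partial wall: the right column stays open
        [0, 0, 0]]
open_path = path_exists(grid, (0, 0), (0, 2))   # True: route around the right side
```

To test a proposed tower, temporarily mark its tile as blocked, run the check from each spawn point, and reject the placement if any spawn can no longer reach its goal.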


syntax - "I want to understand what my options are" or "I want to understand what are my options"?


Which way is correct: "I want to understand what my options are" or "I want to understand what are my options", and why?


Since my English still needs tons of work, this baffled me for a long time, and each time I did a quick search, but I could not find the answer. Thus, I decided to ask it here.



Answer



I used to make (and still occasionally make) mistakes in sentences of this sort.



The correct sentence is



I want to understand what my options are.



There's a nicely-named linguistic term: the penthouse principle.


Quoting Wikipedia,



The penthouse principle: The rules are different if you live in the penthouse. (the "penthouse" here is a clause attached to the matrix clause)



The correct word order for a question:




What are my options? (the positions of the auxiliary verb are and the subject options are inverted)



But when you put this clause in a "penthouse", atop a main clause, you do not invert the positions of the subject and the auxiliary verb:



I want to understand [what my options are]. (the positions are not inverted: the subject options comes first, then the auxiliary are)





Note that AdamV, being a native English speaker, says that




I want to understand what are my options. (incorrect)



feels like "two sentences clumped together". That's because we have subject-auxiliary inversion in "what are my options", and this is proper only when this clause is a main clause, not when it is a subordinate clause, a "penthouse atop a skyscraper".


Naturally, a native speaker would feel that "what are my options" should be a standalone clause.


2d - Should I be using spritesheets, because of or despite of my vast number of images?


I am developing a 2D game, and I have a lot of sprites. I used 3D animations and models rendered into 2D, to give them that "Fallout" or "Diablo" look. It is also easier than drawing by hand, lol.


I have already had to cut the framerate down to 15 fps, which was the lowest I could go without the animations looking choppy. It was sad, though, given how incredibly smooth 24 frames looked.


There are two reasons I did this:


1) Cut down on HDD space. The fewer the images, the smaller my total game will be.


2) Cut down on RAM consumption. The fewer images to load, the more likely I am to avoid issues bloating my RAM limitation.


However, if there were a way to compress the images in both HDD space and RAM, I would do so. I have tested it before, and most images show no change in quality when going from RGBA8888 to RGBA5555, and only a little hit when converting to RGBA4444 in my TexturePacker program. I do not do this currently, because SFML seems to use the same amount of memory regardless of which type of .PNG image it is. I looked into how to load it differently, but failed to find anything on the subject.


I have read a lot about how to handle 2D video games. The consensus is overwhelming: Pack your Sprites into a Bigger Texture for great performance! So I pack my tiny sprites into a much larger spritesheet using TexturePacker.


However, I plan to have 10-15 animations per character, 5 directions of movement, and 15-40 frames per animation (probably an average of 24). With 15 animations, 5 directions, and an average of 24 frames per animation, that is 1800 individual frames per character. Packed into sprite sheets, that is only 75 images instead (one sprite sheet per animation per direction: 15 × 5).
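Spelled out as a quick sanity check of the arithmetic above:

```python
# Sprite-count arithmetic from the paragraph above.
animations = 15
directions = 5
frames_per_animation = 24          # the stated average

individual_frames = animations * directions * frames_per_animation
sheets = animations * directions   # one sheet per animation, per direction

print(individual_frames, sheets)   # 1800 75
```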


For the one huge boss character in the game, I cannot use a spritesheet and will have to program a way to load one image at a time. I do not yet know whether this will perform well enough.



For the characters, I already pack them into a spritesheet. For a single character walking about, this seems to work most of the time, although sometimes it stalls. However, I attribute that to my ill-conceived code that swaps out textures instead of preloading all textures for that character.


If I were to preload the textures, it would make sense to use sprite sheets; I imagine it's a bad idea to preload 1800 tiny images for each character.


However, I imagine streaming them into and out of memory one at a time would be extremely fast, so I would only need to have a single image in memory at one time. Wouldn't this mean that at any given moment I would only have each character consume a few KB instead of 45+MB?


I imagine this would kill my performance, as streaming would need to be incredibly fast (15 images going into and out of memory and being rendered per second), and although the images would be very small, it might be a better idea to load character spritesheets into memory instead. But I will have to code a single-image, stream-like render system for my larger boss character anyway.


I have been experimenting, but it is not a simple process. Especially given the fact I am working on other parts of the game engine that do not deal with graphics right now.



Answer



We have a similar case with our RTS Remake. All units and houses are sprites. We have 18 000 sprites for units, houses and terrain, plus another ~6 000 for team colors (applied as masks). On top of that, we also have some ~30 000 characters used in fonts.


So the main reasons behind atlases are:



  • less wasted RAM (in older days, when you uploaded an NPOT texture to the GPU, it was stretched/padded to POT; I read it's still the same with iOS and some frameworks, so you'd better check on the range of hardware you target)


  • less texture switches

  • faster loading of everything in fewer bigger chunks


What did not work for us:



  • paletted textures. The feature existed only in OpenGL 1.x/2.x, and even then it was mostly dropped by GPU makers. However, if you target OpenGL with shaders, you can do the palette lookup in shader code yourself just fine!

  • NPOT textures; we had issues with wrong borders and blurred sprites, which is unacceptable in pixel art. RAM usage was much higher, too.


Now we have everything packed into several dozen 1024x1024 atlases (modern GPUs support even bigger dimensions) and that works well, eating only ~300 MB of memory, which is quite fine for a PC game. Some optimizations we made:




  • add user option to use RGB5_A1 instead of RGBA8 (checkerboard shadows)

  • avoid 8bit Alpha when possible and use RGB5_A1 format

  • tightly pack sprites into atlases (see Bin Packing algorithms)

  • store and load everything in one chunk from HDD (resource files should be generated offline)

  • you might also try hardware compression formats (DXT, S3TC, etc.)


When you seriously consider moving to mobile devices, you will have to worry about those constraints. For now, just get the game working and attract players! ;)
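The "tightly pack sprites into atlases" step above can be sketched with a minimal shelf packer. This is a toy version of the Bin Packing idea only; real packers (e.g. MaxRects-style algorithms) waste far less space, and all names here are my own.

```python
def shelf_pack(sprites, atlas_w):
    """Tiny 'shelf' bin-packing sketch: sort sprites by height, fill rows
    left to right, and open a new row (shelf) when a sprite no longer fits.
    sprites is a list of (w, h); returns (x, y) placements in sorted order
    plus the total atlas height used."""
    placements = []
    x = y = shelf_h = 0
    for w, h in sorted(sprites, key=lambda s: -s[1]):   # tallest first
        if x + w > atlas_w:        # current row is full: start a new shelf
            x, y = 0, y + shelf_h
            shelf_h = 0
        placements.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)  # shelf is as tall as its tallest sprite
    return placements, y + shelf_h

placements, height = shelf_pack([(64, 64), (64, 32), (32, 32), (64, 64)],
                                atlas_w=128)
print(height)   # everything above fits in a 128x96 region
```

Sorting by height first keeps each shelf dense; the leftover space on the last shelf is the main waste this simple scheme accepts.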


Monday, August 28, 2017

unity - How can I create a "see behind walls" effect?


Divinity: Original Sin 2 has a beautiful see-through system. When I go behind walls, a splash mask appears, and as I move around, it changes. It's like a dissolve shader with a metaball effect.


How can I replicate this effect, creating a dynamic splash mask when players go behind walls?



You can see the desired effect in motion via this YouTube video.





Answer




To make this effect, you can mask objects by using the stencil buffer.



the stencil buffer is a general purpose buffer that allows you to store an additional 8bit integer (i.e. a value from 0-255) for each pixel drawn to the screen. Just as shaders calculate RGB values to determine the colour of pixels on the screen, and z values for the depth of those pixels drawn to the depth buffer, they can also write an arbitrary value for each of those pixels to the stencil buffer. Those stencil values can then be queried and compared by subsequent shader passes to determine how pixels should be composited on the screen.


https://docs.unity3d.com/Manual/SL-Stencil.html


https://alastaira.wordpress.com/2014/12/27/using-the-stencil-buffer-in-unity-free/


http://www.codingwithunity.com/2016/01/stencil-buffer-shader-for-special.html






Mask Stencil:


Stencil 
{
Ref 1 // ReferenceValue = 1
Comp NotEqual // Only render pixels whose reference value differs from the value in the buffer.
}

Wall Stencil:



Stencil
{
Ref 1 // ReferenceValue = 1
Comp Always // Comparison Function - Make the stencil test always pass.
Pass Replace // Write the reference value into the buffer.
}


use this as the mask:


Shader "Custom/SimpleMask"

{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_CutOff("CutOff", Range(0,1)) = 0
}
SubShader
{
LOD 100
Blend One OneMinusSrcAlpha

Tags { "Queue" = "Geometry-1" } // Write to the stencil buffer before drawing any geometry to the screen
ColorMask 0 // Don't write to any colour channels
ZWrite Off // Don't write to the Depth buffer
// Write the value 1 to the stencil buffer
Stencil
{
Ref 1
Comp Always
Pass Replace
}


Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "UnityCG.cginc"

struct appdata

{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};

struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};


sampler2D _MainTex;
float4 _MainTex_ST;
float _CutOff;

v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);


o.uv = TRANSFORM_TEX(v.uv, _MainTex);

return o;
}

fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
float dissolve = step(col, _CutOff);
clip(_CutOff-dissolve);

return float4(1,1,1,1)*dissolve;
}
ENDCG
}
}
}

use this as the wall:


Shader "Custom/Wall" {
Properties {

_Color ("Color", Color) = (1,1,1,1)
_MainTex ("Albedo (RGB)", 2D) = "white" {}
_Glossiness ("Smoothness", Range(0,1)) = 0.5
_Metallic ("Metallic", Range(0,1)) = 0.0
}
SubShader {
Blend SrcAlpha OneMinusSrcAlpha
Tags { "RenderType"="Opaque" }
LOD 200


Stencil {
Ref 1
Comp NotEqual
}

CGPROGRAM
// Physically based Standard lighting model, and enable shadows on all light types
#pragma surface surf Standard fullforwardshadows

// Use shader model 3.0 target, to get nicer looking lighting

#pragma target 3.0

sampler2D _MainTex;

struct Input {
float2 uv_MainTex;
};

half _Glossiness;
half _Metallic;

fixed4 _Color;

void surf (Input IN, inout SurfaceOutputStandard o) {
// Albedo comes from a texture tinted by color
fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
o.Albedo = c.rgb;
// Metallic and smoothness come from slider variables
o.Metallic = _Metallic;
o.Smoothness = _Glossiness;
o.Alpha = c.a;

}
ENDCG
}
FallBack "Diffuse"
}




If you want a procedural texture, you need some noise.


You can see this shader on ShaderToy.



To make this effect, instead of using UV coordinates, use polar coordinates when sampling the noise texture.



UVs are typically laid out in a grid-like fashion, like pixels on a screen (X = width, Y = height). Polar coordinates, however, use the x and y a bit differently. One determines how far away from the center of the circle a point is, and the other determines the angle, normalized to a 0-1 range, depending on what you need.



(diagram: standard grid UVs vs. radial/polar UVs)
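The UV-to-polar remap can be sketched outside the shader as well. This is a plain re-statement of the math used in the shader below, with my own function name.

```python
import math

def polar_uv(u, v):
    """UV -> polar remap as used in the shader: remap [0,1]^2 to [-1,1]^2
    so the circle is centred, then return (radius, angle) with the angle
    normalised from atan2's [-pi, pi] into [0, 1]."""
    x, y = u * 2.0 - 1.0, v * 2.0 - 1.0          # centre the coordinates
    radius = math.hypot(x, y)                    # distance from the centre
    angle = math.atan2(y, x) / (2.0 * math.pi) + 0.5
    return radius, angle

r, a = polar_uv(1.0, 0.5)   # right edge, vertically centred
print(r, a)                 # 1.0 0.5 (atan2(0, 1) == 0)
```

Sampling the noise texture with `(radius, angle + time * speed)` is what makes the mask swirl around its centre, as in the animated shader below.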


Shader "Smkgames/NoisyMask" {
Properties {
_MainTex ("MainTex", 2D) = "white" {}
_Thickness ("Thickness", Range(0, 1)) = 0.25
_NoiseRadius ("Noise Radius", Range(0, 1)) = 1

_CircleRadius("Circle Radius", Range(0, 1)) = 0.5
_Speed("Speed", Float) = 0.5
}
SubShader {
Tags {"Queue"="Transparent" "IgnoreProjector"="true" "RenderType"="Transparent"}
ZWrite Off
Blend SrcAlpha OneMinusSrcAlpha
Cull Off

Pass {

CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
#pragma target 3.0
uniform sampler2D _MainTex; uniform float4 _MainTex_ST;
uniform float _Thickness,_NoiseRadius,_CircleRadius,_Speed;

struct VertexInput {
float4 vertex : POSITION;

float2 texcoord0 : TEXCOORD0;
};
struct VertexOutput {
float4 pos : SV_POSITION;
float2 uv0 : TEXCOORD0;
float4 posWorld : TEXCOORD1;

};
VertexOutput vert (VertexInput v) {
VertexOutput o = (VertexOutput)0;

o.uv0 = v.texcoord0;

o.pos = UnityObjectToClipPos(v.vertex);
o.posWorld = mul(unity_ObjectToWorld, v.vertex);
return o;
}
float4 frag(VertexOutput i, float facing : VFACE) : COLOR {

float2 uv = (i.uv0*2.0+-1.0); // Remapping uv from [0,1] to [-1,1]
float circleMask = step(length(uv),_NoiseRadius); // Making circle by LENGTH of the vector from the pixel to the center

float circleMiddle = step(length(uv),_CircleRadius); // Making circle by LENGTH of the vector from the pixel to the center
float2 polaruv = float2(length(uv),((atan2(uv.g,uv.r)/6.283185)+0.5)); // Making Polar
polaruv += _Time.y*_Speed/10;
float4 _MainTex_var = tex2D(_MainTex,TRANSFORM_TEX(polaruv, _MainTex)); // BackGround Noise
float Noise = (circleMask*step(_MainTex_var.r,_Thickness)); // Masking Background Noise
float3 finalColor = float3(Noise,Noise,Noise);
return fixed4(finalColor+circleMiddle,(finalColor+circleMiddle).r);
}
ENDCG
}

}
FallBack "Diffuse"
}

Another solution is using Worley noise:




You can see this shader in ShaderToy.





Then I added the metaball effect from this article.






There is more...


If you want your mask to rotate to face the camera, you can use billboarding:


 output.pos = mul(UNITY_MATRIX_P, 
mul(UNITY_MATRIX_MV, float4(0.0, 0.0, 0.0, 1.0))
+ float4(input.vertex.x, input.vertex.y, 0.0, 0.0));

This is the mask with billboarding:


Shader "Custom/Mask/SimpleMaskBillBoard"

{
Properties
{
_MainTex ("Texture", 2D) = "white" {}
_CutOff("CutOff", Range(0,1)) = 0
_Radius("Radius", Range(0,1)) = 0.2
_Speed("speed", Float) = 1
_ScaleX ("Scale X", Float) = 1.0
_ScaleY ("Scale Y", Float) = 1.0
}

SubShader
{
LOD 100
Blend One OneMinusSrcAlpha
Tags { "Queue" = "Geometry-1" } // Write to the stencil buffer before drawing any geometry to the screen
ColorMask 0 // Don't write to any colour channels
ZWrite Off // Don't write to the Depth buffer

// Write the value 1 to the stencil buffer
Stencil

{
Ref 1
Comp Always
Pass Replace
}

Pass
{
CGPROGRAM
#pragma vertex vert

#pragma fragment frag

#include "UnityCG.cginc"

struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};


struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};

sampler2D _MainTex;
float4 _MainTex_ST;
float _CutOff;
float _Speed;

float _Radius;
float _ScaleX,_ScaleY;

v2f vert (appdata v)
{
v2f o;
o.vertex = mul(UNITY_MATRIX_P,
mul(UNITY_MATRIX_MV, float4(0.0, 0.0, 0.0, 1.0))
+ float4(v.vertex.x, v.vertex.y, 0.0, 0.0)
* float4(_ScaleX, _ScaleY, 1.0, 1.0));


o.uv = TRANSFORM_TEX(v.uv, _MainTex);
return o;
}

fixed4 frag (v2f i) : SV_Target
{
fixed4 col = tex2D(_MainTex, i.uv);
float dissolve = step(col, _CutOff);
clip(_CutOff-dissolve);

return dissolve;
}
ENDCG
}
}
}







Source is available: https://github.com/smkplus/Divinity-Origin-Sin-2





I found a good tutorial that implements this effect by dissolving the world:




Dissolving the world Part 1


Dissolving the world Part 2




Shader "Custom/DissolveBasedOnViewDistance" {
Properties{

_MainTex("Albedo (RGB)", 2D) = "white" {}
_Center("Dissolve Center", Vector) = (0,0,0,0)
_Interpolation("Dissolve Interpolation", Range(0,5)) = 0.8
_DissTexture("Dissolve Texture", 2D) = "white" {}
}

SubShader{
Tags { "RenderType" = "Opaque" }
LOD 200



CGPROGRAM

#pragma surface surf Standard vertex:vert addshadow

#pragma target 3.0

struct Input {
float2 uv_MainTex;
float2 uv_DissTexture;

float3 worldPos;
float viewDist;
};



sampler2D _MainTex;
sampler2D _DissTexture;
half _Interpolation;
float4 _Center;



// Computes world space view direction
// inline float3 WorldSpaceViewDir( in float4 v )
// {
// return _WorldSpaceCameraPos.xyz - mul(_Object2World, v).xyz;
// }


void vert(inout appdata_full v,out Input o){

UNITY_INITIALIZE_OUTPUT(Input,o);

half3 viewDirW = WorldSpaceViewDir(v.vertex);
o.viewDist = length(viewDirW);

}

void surf(Input IN, inout SurfaceOutputStandard o) {



float l = length(_Center - IN.worldPos.xyz);

clip(saturate(IN.viewDist - l + (tex2D(_DissTexture, IN.uv_DissTexture) * _Interpolation * saturate(IN.viewDist))) - 0.5);

o.Albedo = tex2D(_MainTex,IN.uv_MainTex);
}
ENDCG
}
Fallback "Diffuse"
}




Another stencil tutorial:


Left4Dead


Stencil Tutorial


"to find" or "finding" after the verb "help"


I want to say this sentence:



Could you help me to find a financial support please?




Would it be more correct to say to find or to say finding? Is there a general rule to determine whether "to + verb" or "verb+ing" is more correct? Is there a better way to construct that sentence?



Answer



You can say:



  1. Could you help me to find financial support, please?

  2. Could you help me find financial support, please?

  3. Could you help me in finding financial support, please?


The second sentence is normally used in informal contexts, or when speaking. The OALD has the following note about using "help somebody to":




In verb patterns with a to infinitive, the ‘to’ is often left out, especially in informal or spoken English.



physics - How does one avoid the "staircase effect" in pixel art motion?


I am rendering sprites at exact pixel coordinates to avoid the blurring effect caused by antialiasing (the sprites are pixel-art and would look awful if filtered). However, since the movement of the objects involves variable velocity, gravity, and physical interactions, the trajectory is computed with subpixel precision.


At large enough screenspace velocities (vΔt larger than 2 or 3 pixels) this works very well. However, when velocity is small, a noticeable staircase effect can appear, especially along diagonal lines. This is no longer a problem at very slow screenspace velocities (v << 1 pixel per second) so I am only looking for a solution for intermediate velocity values.


On the left is the plotted trajectory for a large velocity, obtained by simple rounding of the object coordinates. In the middle you can see what happens when velocity becomes smaller, and the staircase effect I am talking about. On the right, the locus of the trajectory I would like to get.


pixel coordinates for object trajectory


I am interested in algorithm ideas to filter the trajectory in order to minimise the aliasing, while retaining the original behaviour at large and small velocities. I have access to Δt, instant position and velocity, as well as an arbitrary number of previous values, but since it is a realtime simulation, I do not know about future values (though if necessary, an estimation could be extrapolated under certain assumptions). Note that because of the physics simulation, sudden direction changes can also happen.




Answer



Here's a quick outline, off the top of my head, of an algorithm that ought to work reasonably well.



  1. First, calculate the direction the object is moving, and check whether it's closer to horizontal or vertical.

  2. If the direction is closer to vertical (horizontal), adjust the position of the object along the direction vector to the center of the nearest pixel row (column).

  3. Round the position to the center of the nearest pixel.


In pseudocode:


if ( abs(velocity.x) > abs(velocity.y) ) {
    x = round(position.x);
    y = round(position.y + (x - position.x) * velocity.y / velocity.x);
} else {
    y = round(position.y);
    x = round(position.x + (y - position.y) * velocity.x / velocity.y);
}
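The pseudocode above translates directly into a runnable function; here is a sketch in Python (with a guard for zero velocity added, which the pseudocode leaves implicit):

```python
def snap_to_pixel(px, py, vx, vy):
    """Snap a subpixel position (px, py) to a pixel centre, adjusting
    along the velocity (vx, vy) so near-diagonal motion stays on a
    clean line instead of a staircase."""
    if vx == 0 and vy == 0:
        return round(px), round(py)
    if abs(vx) > abs(vy):
        # direction closer to horizontal: snap x, slide y along the path
        x = round(px)
        y = round(py + (x - px) * vy / vx)
    else:
        # direction closer to vertical: snap y, slide x along the path
        y = round(py)
        x = round(px + (y - py) * vx / vy)
    return x, y
```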

Edit: Yep, tested, works quite nicely.


phrase usage - What does 'the very next day' mean?


In the song, Last Christmas, I heard the phrase "But the very next day." I'm not sure what it's supposed to mean, but from context I guess it's the day after Christmas.


Is it grammatically correct to say "very next"? Something is next or is not next. Can something be 'more next' than something else? Can something be 'very next' or just 'a little next'?




Answer



It is grammatically correct to say "very next". The very next day means the day after a certain event happened or happens. It means the same as the next day, but with emphasis (to denote the short time period), and it is used only in time-sensitive contexts, not every time one wants to refer to the next day. For example:



John was not able to go to school that day but the very next day he recovered and went to school all fine and dandy.


Instead of returning to London three days later, as he said he would, he came back the very next day.



adverbs - differentiating between still and yet


UPDATED:


Do you still live in the same place or have you moved? (How could we understand whether the time has elapsed?)


Are the following correct?


Do you live in the same place yet or have you moved?


Don't you live in the same place yet or have you moved?



Answer



I'd suggest looking at this link on yet vs still.


To sum up, still is used to imply a continuing action in affirmative sentences. So, basically you'd use still as follows: "I'm still in the same house."



If you want to use still in a sentence with not, it's important to place still in the correct position before or after not. To borrow an example from the link:


"I do not still have the picture"

I don't have the picture anymore, though I had it earlier. Note the not is before still.


"I still do not have the picture"

I am waiting for the picture to arrive. The not is after still.


For yet, it's not usually used in affirmative sentences and is mainly used with actions that aren't yet over or finished. Like -


"I haven't yet finished moving into my new place" - I'm in the process, but it's not done.


It's common to find yet with not, as shown above (have not).



It's rare to find yet in an affirmative sentence. It can be used in questions as follows -


"Has he found out about the place yet?"

to ask if he has found out about the new house/place in the time that has elapsed. You wouldn't use still here.


Think of still as inertia of action and yet as inertia of inaction.


Sunday, August 27, 2017

Is ruby a suitable language for game development?




I want to move into some game development, but the only language I know really well is Ruby. Most of what I have read seems to point towards lower level languages like C++ for game development, or languages for specific frameworks like C# for using XNA. Does anyone have any experience using a language like ruby for game development? If so, would you advise for or against it?



Answer



I have no preference towards Ruby (or Python), I'm a Java person myself. But UnknownDevice's answer about how Ruby is somehow "not really for games" and Python is, frustrated me. I do hope he will clarify.


I know Pygame exists and has been around, and I recognize that Python has a larger userbase than Ruby. But to be honest, neither of them seems like a language "for games". Neither does Java, and that's my game programming language of choice. (and when I say "of choice", I do mean by choice, not because it's what's taught in school or because it's something I "know"). And really, what is a language "for games"? Well, speed is a factor, and obviously it must have libraries for graphics and other game systems (audio, input, etc).


As far as speed goes, it seems to be a toss-up between Ruby and Python. Do some searches and you'll quickly find benchmarks and arguments for both sides of the spectrum, and various configurations which put one or the other ahead. Python with something called "Psyco" seems a popular speed demon compared to Ruby, yet regular Python seems to be a bit slower than Ruby. In the end, if you're choosing such a high-level language you're obviously not concerned with native speeds anyway; go with the language you know best. And obviously you know Ruby best, so I encourage it!


The other factor is whether the technology is there to create games; whether it can support drawing to the screen and collecting input and playing audio. Ruby can do all of these. In fact there are a good number of options in this respect. There's a ruby-opengl package at RubyForge which will give OpenGL support to Ruby (or it might be included by default?). Alternatively, Chingu provides "lightning fast OpenGL accelerated 2D graphics!" according to its homepage; it builds extra features on top of Gosu, which you could choose to use if Chingu is too much for you. Or for 3D graphics, if you don't want to use ruby-opengl, try G3DRuby, "a very clean set of wrapper classes for many of the more advanced OpenGL features". There's even Rubygame, which I can't find much information on but it claims to be "a cross-platform multimedia library" and given the name, must have emphasis on game development. If you are familiar with the popular SDL library for C++, there's Ruby/SDL or RUDL, both of which are Ruby wrappers of SDL. Or if you prefer the newer, more object-oriented SFML, it is also available for Ruby!


There is no reason that Ruby should be less of a game programming language than Python; if there is one, I'd really like to hear it so I can argue against it. If you feel most comfortable programming in Ruby, and you are aware of the pros and cons compared to other popular languages, then by all means you can certainly develop games in Ruby!


idioms - 'What a big cheeks'? or 'What big cheeks'?


My brother just got his wisdom teeth extracted, and his cheeks are really big now. So cute.


I want to express my feeling by saying "What a big cheeks!" But I feel weird about this sentence: Because I am saying "cheeks" I should not use "a", but if I say "What big cheeks!" it feels even more weird…


Any suggestions?



Answer





What big cheeks!



This is correct.


Your reasoning is also correct: cheeks is plural so you don't use the indefinite article a.


c++ - How can I optimize a collision engine where order is significant and collision is conditional based on object group?


If this is your first time on this question, I suggest reading the pre-update part below first, then this part. Here's a synthesis of the problem, though:



Basically, I have a collision detection and resolution engine with a grid spatial partitioning system where order-of-collision and collision groups matter. One body at a time must move, then detect collision, then resolve collisions. If I move all the bodies at once, then generate possible collision pairs, it is obviously faster, but resolution breaks because order-of-collision isn't respected. If I move one body a time, I'm forced to get bodies to check collisions, and it becomes a n^2 problem. Put groups in the mix, and you can imagine why it gets very slow very fast with a lot of bodies.






Update: I've worked really hard on this, but couldn't manage to optimize anything.


I successfully implemented the "painting" described by Will and changed groups into bitsets, but it is a very very minor speedup.


I also discovered a big issue: my engine is order-of-collision dependent.


I tried an implementation of unique collision pair generation, which definitely speed up everything by a lot, but broke the order-of-collision.


Let me explain:




  • in my original design (not generating pairs), this happens:




    1. a single body moves

    2. after it has moved, it refreshes its cells and gets the bodies it collides against

    3. if it overlaps a body it needs to resolve against, resolve the collision


    this means that if a body moves, and hits a wall (or any other body), only the body that has moved will resolve its collision and the other body will be unaffected.


    This is the behavior I desire.


    I understand it's not common for physics engines, but it has a lot of advantages for retro-style games.





  • in the usual grid design (generating unique pairs), this happens:



    1. all bodies move

    2. after all bodies have moved, refresh all cells

    3. generate unique collision pairs

    4. for each pair, handle collision detection and resolution


    in this case a simultaneous move could have resulted in two bodies overlapping, and they will resolve at the same time - this effectively makes the bodies "push one another around", and breaks collision stability with multiple bodies


    This behavior is common for physics engines, but it is not acceptable in my case.





I also found another issue, which is major (even if it's not likely to happen in a real-world situation):



  • consider bodies of group A, B and W

  • A collides and resolves against W and A

  • B collides and resolves against W and B

  • A does nothing against B

  • B does nothing against A


there can be a situation where a lot of A bodies and B bodies occupy the same cell - in that case, there is a lot of unnecessary iteration between bodies that mustn't react to one another (or only detect collision but not resolve them).



For 100 bodies occupying the same cell, that's 100×100 = 10,000 iterations! This happens because unique pairs aren't being generated - but I can't generate unique pairs, otherwise I would get a behavior I do not desire.


Is there a way to optimize this kind of collision engine?


These are the guidelines that must be respected:




  • Order of collision is extremely important!



    • Bodies must move one at a time, then check for collisions one at a time, and resolve after movement one at a time.





  • Bodies must have 3 group bitsets



    • Groups: groups the body belongs to

    • GroupsToCheck: groups the body must detect collision against

    • GroupsNoResolve: groups the body must not resolve collision against

    • There can be situations where I only want a collision to be detected but not resolved











Pre-update:


Foreword: I'm aware that optimizing this bottleneck is not a necessity - the engine is already very fast. I, however, for fun and educational purposes, would love to find a way to make the engine even faster.




I'm creating a general-purpose C++ 2D collision detection/response engine, with an emphasis on flexibility and speed.


Here's a very basic diagram of its architecture:


Basic engine architecture


Basically, the main class is World, which owns (manages the memory of) a ResolverBase*, a SpatialBase*, and a vector of Body*.



SpatialBase is a pure virtual class which deals with broad-phase collision detection.


ResolverBase is a pure virtual class which deals with collision resolution.


The bodies communicate to the World::SpatialBase* with SpatialInfo objects, owned by the bodies themselves.




There currently is one spatial class: Grid : SpatialBase, which is a basic fixed 2D grid. It has its own info class, GridInfo : SpatialInfo.


Here's how its architecture looks:


Engine architecture with grid spatial


The Grid class owns a 2D array of Cell*. The Cell class contains a collection of (not owned) Body*: a vector which contains all the bodies that are in the cell.


GridInfo objects also contain non-owning pointers to the cells the body is in.





As I previously said, the engine is based on groups.



  • Body::getGroups() returns a std::bitset of all the groups the body is part of.

  • Body::getGroupsToCheck() returns a std::bitset of all the groups the body has to check collision against.


Bodies can occupy more than a single cell. GridInfo always stores non-owning pointers to the occupied cells.




After a single body moves, collision detection happens. I assume that all bodies are axis-aligned bounding boxes.


How broad-phase collision detection works:


Part 1: spatial info update



For each Body body:





    • Top-leftmost occupied cell and bottom-rightmost occupied cells are calculated.

    • If they differ from the previous cells, body.gridInfo.cells is cleared, and filled with all the cells the body occupies (2D for loop from the top-leftmost cell to the bottom-rightmost cell).




  1. body is now guaranteed to know what cells it occupies.





Part 2: actual collision checks


For each Body body:



  • body.gridInfo.handleCollisions is called:




void GridInfo::handleCollisions(float mFrameTime)
{
    static int paint{-1};
    ++paint;

    for(const auto& c : cells)
        for(const auto& b : c->getBodies())
        {
            if(b->paint == paint) continue;
            base.handleCollision(mFrameTime, b);
            b->paint = paint;
        }
}



void Body::handleCollision(float mFrameTime, Body* mBody)
{
    if(mBody == this || !mustCheck(*mBody) || !shape.isOverlapping(mBody->getShape())) return;

    auto intersection(getMinIntersection(shape, mBody->getShape()));

    onDetection({*mBody, mFrameTime, mBody->getUserData(), intersection});
    mBody->onDetection({*this, mFrameTime, userData, -intersection});

    if(!resolve || mustIgnoreResolution(*mBody)) return;
    bodiesToResolve.push_back(mBody);
}





  • Collision is then resolved for every body in bodiesToResolve.





  • That's it.






So, I've been trying to optimize this broad-phase collision detection for quite a while now. Every time I try something else than the current architecture/setup, something doesn't go as planned or I make assumption about the simulation that later are proven to be false.


My question is: how can I optimize the broad-phase of my collision engine?


Is there some kind of magic C++ optimization that can be applied here?


Can the architecture be redesigned in order to allow for more performance?






Callgrind output for latest version: http://txtup.co/rLJgz



Answer



getBodiesToCheck()


There could be two problems with the getBodiesToCheck() function; first:


if(!contains(bodiesToCheck, b)) bodiesToCheck.push_back(b);

This part is O(n²), isn't it?


Rather than checking to see if the body is already in the list, use painting instead.



loop_count++;
if(!loop_count) { // if loop_count can wrap,
    // you just need to iterate all bodies to reset it here
}
bodiesToCheck.clear();
for(const auto& q : queries)
    for(const auto& b : *q)
        if(b->paint != loop_count) {
            bodiesToCheck.push_back(b);
            b->paint = loop_count;
        }
return bodiesToCheck;
You are dereferencing the pointer in the gather phase, but you'd be dereferencing it in the test phase anyway, so if you have enough L1 it's no big deal. You can improve performance by adding pre-fetch hints for the compiler too, e.g. __builtin_prefetch, although that is easier with classic for(int i=q->length; i-->0; ) loops and such.


That's a simple tweak, but my second thought is that there could be a faster way to organise this:


You can move to using bitmaps instead, though, and avoiding the whole bodiesToCheck vector. Here's an approach:


You are using integer keys for bodies already, but then looking them up in maps and things and keeping around lists of them. You can move to a slot allocator, which is basically just an array or vector. E.g.:


class TBodyImpl {
public:
    virtual ~TBodyImpl() {}
    virtual void onHit(int other) {}
    virtual ....
    const int slot;
protected:
    TBodyImpl(int slot_): slot(slot_) {}
};

struct TBodyBase {
    enum ... type;
    ...
    rect_t rect;
    TQuadTreeNode *quadTreeNode; // see below
    TBodyImpl* imp; // often null
};

std::vector<TBodyBase> bodies; // not pointers to them

What this means is that all the stuff needed to do the actual collisions is in linear cache-friendly memory, and you only go out to the implementation-specific bit and attach it to one of these slots if there's some need to.


To track the allocations in this vector of bodies you can use an array of integers as a bitmap and use bit twiddling or __builtin_ffs etc. This makes it super efficient to move to slots that are currently occupied, or to find an unoccupied slot in the array. You can even compact the array occasionally, if it grows unreasonably large and many slots are marked deleted, by moving those at the end to fill in the gaps.


only check for each collision once



If you've checked if a collides with b, you don't need to check if b collides with a too.


It follows from using integer ids that you avoid these checks with a simple if-statement. If the id of a potential collision is less-than-or-equal to the current id being checked for, it can be skipped! This way, you'll only ever check each possible pairing once; that'll more than halve the number of collision checks.


unsigned * bitmap;
int bitmap_len;
...

for(int i=0; i<bitmap_len; i++) {
    unsigned mask = bitmap[i];
    while(mask) {
        const int j = __builtin_ffs(mask);
        const int slot = i*sizeof(unsigned)*8 + j;
        for(int neighbour: get_neighbours(slot))
            if(neighbour > slot)
                check_for_collision(slot, neighbour);
        mask >>= j;
    }
}

respect the order of collisions


Rather than evaluating a collision as soon as a pair is found, compute the distance to hit and store that in a binary heap. These heaps are how you typically do priority queues in path-finding, so this is very useful utility code.


Mark each node with a sequence number, so you can say:




  • A10 hits B12 at 6

  • A10 hits C12 at 3


Obviously after you've gathered all the collisions, you start popping them from the priority queue, soonest first. So the first you get is A10 hits C12 at 3. You increment each object's sequence number (the 10 bit), evaluate the collision, and compute their new paths, and store their new collisions in the same queue. The new collision is A11 hits B12 at 7. The queue now has:



  • A10 hits B12 at 6

  • A11 hits B12 at 7


Then you pop from the priority queue and it's A10 hits B12 at 6. But you see that A10 is stale; A is currently at 11. So you can discard this collision.



It's important not to bother trying to delete all stale collisions from the heap; removing from a heap is expensive. Simply discard them when you pop them.
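That scheme (a heap ordered by time of impact, with stale entries discarded as they are popped) can be sketched as follows (Python's heapq for illustration; the names are mine):

```python
import heapq

def resolve_in_order(collisions, seq):
    """collisions: (time, body_a, seq_a, body_b, seq_b) tuples.
    seq: dict mapping body id -> current sequence number.
    Pop soonest-first; entries whose recorded sequence numbers are
    stale (a body already resolved an earlier hit) are discarded."""
    heap = list(collisions)
    heapq.heapify(heap)
    resolved = []
    while heap:
        t, a, sa, b, sb = heapq.heappop(heap)
        if seq[a] != sa or seq[b] != sb:
            continue  # stale entry: discard on pop, never delete mid-heap
        resolved.append((t, a, b))
        seq[a] += 1  # bump sequence numbers so any queued entry
        seq[b] += 1  # naming the old numbers becomes stale
        # (a real engine would recompute paths here and push new collisions)
    return resolved
```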


the grid


You should consider using a quadtree instead. It's a very straightforward data structure to implement. Often you see implementations that store points, but I prefer to store rects, and store the element in the node that contains it. This means that to check collisions you only have to iterate over all bodies, and, for each, check it against those bodies in the same quad-tree node (using the sorting trick outlined above) and all those in parent quad-tree nodes. The quad-tree is itself the possible-collision list.


Here's a simple Quadtree:


struct Object {
    Rect bounds;
    Point pos;
    Object * prev, * next;
    QuadTreeNode * parent;
};

struct QuadTreeNode {
    Rect bounds;
    Point centre;
    Object * staticObjects;
    Object * movableObjects;
    QuadTreeNode * parent; // null if root
    QuadTreeNode * children[4]; // null for unallocated children
};


We store the movable objects separately because we don't have to check if the static objects are going to collide with anything.


We are modeling all objects as axis-aligned bounding boxes (AABB) and we put them in the smallest QuadTreeNode that contains them. When a QuadTreeNode contains a lot of objects, you can subdivide it further (if those objects distribute themselves into the children nicely).


Each game tick, you need to recurse into the quadtree and compute the move - and collisions - of each movable object. It has to be checked for collisions with:



  • every static object in its node

  • every movable object in its node that is before it (or after it; just pick a direction) in the movableObjects list

  • every movable and static object in all parent nodes


This will generate all possible collisions, unordered. You then do the moves. You have to prioritise these moves by distance and 'who moves first' (which is your special requirement), and execute them in that order. Use a heap for this.


You can optimise this quadtree template; you don't need to actually store the bounds and centre-point; that's entirely derivable when you walk the tree. You don't need to check if a model is within the bounds, only check which side of the centre-point it is on (an "axis of separation" test).
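The centre-point side test mentioned above decides which child quadrant (if any) wholly contains a rect; a minimal sketch (Python; assumes axis-aligned rects, and the function name and quadrant numbering are mine):

```python
def smallest_containing_child(cx, cy, rect):
    """rect = (x0, y0, x1, y1), axis-aligned. Return the child quadrant
    index (0..3) the rect fits into entirely, or None if it straddles a
    centre axis and must stay in the current node."""
    x0, y0, x1, y1 = rect
    if x1 <= cx:
        right = False
    elif x0 >= cx:
        right = True
    else:
        return None  # straddles the vertical centre axis
    if y1 <= cy:
        top = False
    elif y0 >= cy:
        top = True
    else:
        return None  # straddles the horizontal centre axis
    return (2 if top else 0) + (1 if right else 0)
```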



To model fast flying things like projectiles, rather than moving them each step or having a separate 'bullets' list that you always check, simply put them in the quadtree with the rect of their flight for some number of game steps. This means that they move in the quadtree much more rarely, but you aren't checking bullets against far-off walls, so it's a good tradeoff.


Large static objects should be split into component parts; a large cube should have each face separately stored, for example.


idioms - to have somebody down as somebody


Is the idiomatic expression to have somebody down as somebody a primarily British phrase?


For example:



  • I never had you down as a Luddite.


If so, is there a corresponding American idiom?




modifiers - What is the difference between an adjective before the noun and after the noun?


For a long time I've had trouble understanding the difference between the two kinds of expressions below, in terms of meaning, not grammar:




  1. Excited people are looking forward to seeing this event.

  2. People excited are looking forward to seeing this event.




EDITED TO USE CLEARER EXAMPLES:




  1. "All navigable rivers are being patrolled."

  2. "All rivers navigable are being patrolled."



As a native speaker, how do these expressions in bold sound to you? Is there any difference in meaning between examples #3 and #4? If so what is it, and why is this so?




Answer



The problem is that grammar is somewhat tied to meaning here. The position of an adjective in a sentence depends on its role.


When used attributively (to describe a noun), as stated in other comments and answers, the adjective comes before the noun:



All navigable rivers are being patrolled.



If you say:



All rivers that are navigable are being patrolled. (Others are not)




This can become:



All rivers navigable are being patrolled.



At first glance this doesn't really seem to change the meaning since:


rivers that are navigable = navigable rivers




Edit: But...


When an adjective comes after the noun it describes (like in the 3rd example), it functions as a postpositive modifier. Changing the position of the adjective (relative to the noun it describes) may bring a slight difference in the meaning of the sentence (the meaning of the word itself does not change!). When used postpositively an adjective connotes an ephemeral quality, one that is present at the moment, but doesn't always have to be. On the other hand, the adjectives used attributively may express either an ephemeral or a permanent characteristic, depending on the context. The difference between attributive and postpositive use of an adjective is explained in more detail in (the middle of) this post and in the comments.





Only some adjectives can be used both attributively and postpositively (while retaining the same word meaning), and these are the ones ending in -able and -ible (such as navigable). (But not even all of those - see later: responsible).


To cover another aspect (this is where grammar kicks in again): if an adjective is used predicatively (in a pattern: subject + verb + object + complement (here an adjective)) it would be in a sentence like this:



Signalisation on the banks made rivers navigable. (Or something like that, I'm not really an expert on rivers).


The upcoming event made people excited.



The meaning of some adjectives (when used as modifiers) changes depending on whether they are used attributively or postpositively. Some examples are: concerned, responsible, present etc. Neither navigable nor excited are among those. Here the meaning of the word itself changes and the difference can be determined by checking the dictionary definitions.


Saturday, August 26, 2017

ios - When should you roll your own game engine?



I've been a software developer for 5 years, now, and I am wanting to get into iOS game development. I've played around with the iOS SDK for about 2 years, attending cocoaheads meetings, and I feel I have a good grasp on objective-c, cocoa and even c and c++.


I have a game idea, and know that I will use Box2D, but I'm wondering if I should use cocos2D or not. The main reasons are:



  1. I may want to do things, graphics wise, that aren't available in cocos2d.

  2. If I roll my own game engine, I'll have more control.



Of course, the main reason for using an already-existing game engine is the time it saves, and it makes the hard stuff easier; but for someone who has the technical chops to roll his own, does it make sense?



Answer



Most of the other posts will be "make a game not an engine", but I'm going to assume that you have a particular game in mind you want to make and want to know when it's a good idea to start with somebody else's code base or start from scratch.


You shouldn't roll your own tech unless you know you need to roll your own. That may sound flippant but it's really the only correct answer. As with most decisions, there are tradeoffs. Only you can determine for your particular situation the cost/benefit analysis.


You should have an understanding of the following things (this list is hardly all inclusive).



  • What middleware is already out there that you could use ("engine" or otherwise)

  • What that middleware brings to the table, feature wise.

  • How mature/proven the middleware is, especially if you care about multiplatform support

  • What kind of tools the middleware provides, or doesn't provide, to help speed up development (don't discount tools with your own tech)


  • What limitations that middleware has (as a simple example, Unity 3.x didn't do real time shadows from dynamic lights on iOS)

  • What specific features your particular game has to have.

  • What your deadlines are, and how much time you will have to spend to get up to the point of where the middleware will get you vs. how much the middleware costs.

  • How extensible the middleware is (for example, you can get around the shadow problem on iOS in Unity by using blob shadows. Or maybe projection shadows.)


(Notice that I specifically didn't put "more control" up there. That's a loaded phrase that could range from "I don't like code I don't write" to "I need to be able to see, understand, and tweak all the variables in the physics engine to achieve this particular effect." The first one isn't really a valid consideration, but the second is.)


Personally, I find that rolling your own tech for a low-budget game is hardly ever worth the effort. The amount of power you get from the cheap engines these days is ridiculous. You're not at a point where you're deciding on a multimillion-dollar AAA engine license or not. You're not going to be able to beat what, say, Unity offers you for $3k. Or Cocos2d for whatever it costs (isn't it free?).


Now, if your game is mostly focused around some kind of tech that other engines can't provide, or can't provide at a reasonable framerate, then it might be worth investigating what you can do. That doesn't mean you throw out the other middleware entirely, though. Just because you need your own, say, renderer doesn't mean you can't use some other middleware for physics or sound or UI or what have you.


complementation - What are the differences between 'try to + verb', 'try + verb-ing' and 'try + verb'?





  1. I try to ride a bike.

  2. I try riding a bike.

  3. I try ride a bike.


What is the difference between the three sentences above? Please tell me about it.




word choice - As, when or while?


What is the right way to say this:



The birds were singing as/when/while Jill stopped on the old wooden bridge to look down at the ducks.




It'd be good if you gave a little explanation as well.




Friday, August 25, 2017

What are the benefits of a binary format when storing map info?



I am developing an isometric 2D exploration game. At the moment I'm facing a problem where the game takes too much disk space: the game world is currently about 1 square kilometre, and it's about 50 MB.


I need to somehow shrink it. Should I think about compressing it into an archive, or maybe there's some kind of game-file packing technique?


What about binary? Can someone explain the magic behind it to me? I've heard that a lot of people use it, but when I tried it, it took the same amount of space as a simple .txt file.


I'm new to file formats, so I would be grateful for any ideas. Thanks.



Answer



Use the organization of the data to your benefit. You can always expect the data in the same order, so you know what the next bytes belong to. For example (not specific to your data), when reading in the data, always expect two bytes for tile type, two bytes for lighting information, and then two bytes for extra info. That way the reader knows that after 6 bytes, it's time to move on to the next tile. Don't store strings for your tile types; with two bytes you have many thousands of possible types, and it can take two bytes just to store one character, depending on the encoding.


Don't store position information; it should be implied by the tile order. Always store the tile information in chunks, one column at a time (or one row at a time). This lets you know the position of the next tile without needing to read it from the byte stream. You read the starting position of the chunk, then the first tile is placed at that position. Then you know the second tile will be placed at the chunk position plus one in the Y direction (if storing column by column).
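To illustrate the implied-position idea (the chunk height of 16 here is a made-up value, not something from your data), a tile's coordinates can be recovered purely from its index in the stream:

```csharp
using System;

public class ChunkLayout
{
    // Hypothetical chunk shape: 16 tiles per column, stored column by column.
    const int ChunkHeight = 16;

    // No per-tile position is stored; the (x, y) offset inside the chunk
    // is derived entirely from the tile's index in the byte stream.
    public static (int x, int y) PositionFromIndex(int index)
    {
        return (index / ChunkHeight, index % ChunkHeight);
    }

    public static void Main()
    {
        Console.WriteLine(PositionFromIndex(0));   // (0, 0) -- first tile sits at the chunk origin
        Console.WriteLine(PositionFromIndex(17));  // (1, 1) -- second column, one step down
    }
}
```

Add the chunk's world-space starting position to this offset and you've reconstructed everything a text format would have spelled out per tile.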


If your goal is to make the file smaller, you are probably going to have to give up the human readable feature (which is what your current formatting looks like it was designed for).


As for defining tiles as numbers: you just create a "conversion chart" in your code:


Decimal : Name  : Byte Value

      0 : None  : 0000 0000
      1 : Dirt  : 0000 0001
      2 : Grass : 0000 0010
      3 : Snow  : 0000 0011
    ...
    230 : Lava  : 1110 0110

If you have fewer than 256 different types of tiles (dirt, grass, sand, etc.), which is likely, you should just write a single byte to store the value of the tile.
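A minimal sketch of that idea in C# (the tile names and the three-tile map are invented for illustration; this isn't tied to any particular engine): each tile becomes exactly one byte, written and read in a fixed order with BinaryWriter:

```csharp
using System;
using System.IO;

public class BinaryMapDemo
{
    // Hypothetical tile IDs -- a single byte covers up to 256 types.
    public enum Tile : byte { None = 0, Dirt = 1, Grass = 2, Snow = 3 }

    // Write one byte per tile, in a fixed order; no names, no newlines.
    public static byte[] Save(Tile[] tiles)
    {
        var ms = new MemoryStream();
        using (var w = new BinaryWriter(ms))
            foreach (var t in tiles)
                w.Write((byte)t);
        return ms.ToArray(); // still valid after the stream is closed
    }

    // Read the bytes back; each tile's position is implied by its index.
    public static Tile[] Load(byte[] data)
    {
        var tiles = new Tile[data.Length];
        for (int i = 0; i < data.Length; i++)
            tiles[i] = (Tile)data[i];
        return tiles;
    }

    public static void Main()
    {
        var map = new[] { Tile.Grass, Tile.Dirt, Tile.Snow };
        byte[] bytes = Save(map);
        Console.WriteLine(bytes.Length);   // 3 bytes for 3 tiles
        Console.WriteLine(Load(bytes)[0]); // Grass
    }
}
```

Storing the same three tiles as text ("Grass\nDirt\nSnow\n") would take 16 bytes instead of 3, and the gap only grows as you add per-tile lighting or extra info.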


When you write the data, make sure you're writing in a binary format. If you're writing characters that represent numbers, you're not doing it right. When you read the data back in, you check each byte value against your chart and load the appropriate tile into the game.


Look into how your language of choice can write to a byte stream, or write bytes directly. It's clear that you're writing text to your file, because you have newlines for each tile. A binary format will probably be unreadable in your average text editor, but your game can read it, so that doesn't matter.



Preposition confusion - Do you learn something 'at' school OR 'in' school?


I know that we use the preposition 'at' to describe a place. "I learned French at school" is one of the examples I can think of.


I came across two examples that use the preposition 'in' with 'school' in the context of 'learning'.


Example #1 from the Business Illinois website:




Eleven things that you will never learn in school.



Example #2 from the World Bank blog:



Do you think the skills you learn in school will help you get the job you want?



The use that I had in mind is this one from the New York Times:



Should Parents Control What Kids Learn at School?






unity - Tile manipulation script doesn't work in build but works in editor


I have 100 cubes which make up a plane, and I have a script that moves those cubes in random patterns according to randomly chosen methods.


For example, in the method LeftPartFalls, only the cubes on the left part of the plane change materials, then fall down, then change materials again, then rise back up.



In the other methods I do the same actions, but in different patterns, like:



  • only the right part falls

  • only the first, second, third, fourth and fifth lines fall


This all works fine when I test it in the Unity editor.


But when I build it for Android, the cubes still change colour and fall down, but not in the pattern that I wanted. Instead I get a kind of mosaic pattern, and I noticed that it alternates between 2 random-looking mosaic patterns. For example, the first might look like this:


Example of random mosaic pattern


And after the cubes that are low in that screenshot rise back up, the pattern reverses, with the cubes that were high taking their turn to fall. And then the cycle repeats.


Why is this behaviour different on Android than in the editor, and how can I fix it?





This is one of my methods to make a pattern of cubes change colour, fall, and rise:


void LeftPartFalls()
{
if (LeftPartMat)
{
for (int i = 0; i < Mathf.Min(5, FirstLine.Length); i++) // COLOR CHANGE
{
FirstLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
SecondLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
ThirdLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
FourthLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
FiftLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
SixthLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
SeventhLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
EighthLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
NinthLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
TenthLine[i].gameObject.GetComponent<Renderer>().material = greyMAT;
}
}

if (!LeftPartMat)
{
for (int i = 0; i < Mathf.Min(5, FirstLine.Length); i++) // COLOR CHANGE 2
{
FirstLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
SecondLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
ThirdLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
FourthLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
FiftLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
SixthLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
SeventhLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
EighthLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
NinthLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
TenthLine[i].gameObject.GetComponent<Renderer>().material = whiteMAT;
}
}

if (LeftPartFall) // FALL DOWN
{
for (int i = 0; i < Mathf.Min(5, FirstLine.Length); i++)

{
FirstLine[i].gameObject.transform.position = Vector3.Lerp(FirstLine[i].transform.position, new Vector3(FirstLine[i].transform.position.x, -4f, FirstLine[i].transform.position.z), t);
SecondLine[i].gameObject.transform.position = Vector3.Lerp(SecondLine[i].transform.position, new Vector3(SecondLine[i].transform.position.x, -4f, SecondLine[i].transform.position.z), t);
ThirdLine[i].gameObject.transform.position = Vector3.Lerp(ThirdLine[i].transform.position, new Vector3(ThirdLine[i].transform.position.x, -4f, ThirdLine[i].transform.position.z), t);
FourthLine[i].gameObject.transform.position = Vector3.Lerp(FourthLine[i].transform.position, new Vector3(FourthLine[i].transform.position.x, -4f, FourthLine[i].transform.position.z), t);
FiftLine[i].gameObject.transform.position = Vector3.Lerp(FiftLine[i].transform.position, new Vector3(FiftLine[i].transform.position.x, -4f, FiftLine[i].transform.position.z), t);
SixthLine[i].gameObject.transform.position = Vector3.Lerp(SixthLine[i].transform.position, new Vector3(SixthLine[i].transform.position.x, -4f, SixthLine[i].transform.position.z), t);
SeventhLine[i].gameObject.transform.position = Vector3.Lerp(SeventhLine[i].transform.position, new Vector3(SeventhLine[i].transform.position.x, -4f, SeventhLine[i].transform.position.z), t);
EighthLine[i].gameObject.transform.position = Vector3.Lerp(EighthLine[i].transform.position, new Vector3(EighthLine[i].transform.position.x, -4f, EighthLine[i].transform.position.z), t);
NinthLine[i].gameObject.transform.position = Vector3.Lerp(NinthLine[i].transform.position, new Vector3(NinthLine[i].transform.position.x, -4f, NinthLine[i].transform.position.z), t);

TenthLine[i].gameObject.transform.position = Vector3.Lerp(TenthLine[i].transform.position, new Vector3(TenthLine[i].transform.position.x, -4f, TenthLine[i].transform.position.z), t);
}
}
if (!LeftPartFall) // RISE UP
{
for (int i = 0; i < Mathf.Min(5, FirstLine.Length); i++)
{
FirstLine[i].gameObject.transform.position = Vector3.Lerp(FirstLine[i].transform.position, new Vector3(FirstLine[i].transform.position.x, 0f, FirstLine[i].transform.position.z), t);
SecondLine[i].gameObject.transform.position = Vector3.Lerp(SecondLine[i].transform.position, new Vector3(SecondLine[i].transform.position.x, 0f, SecondLine[i].transform.position.z), t);
ThirdLine[i].gameObject.transform.position = Vector3.Lerp(ThirdLine[i].transform.position, new Vector3(ThirdLine[i].transform.position.x, 0f, ThirdLine[i].transform.position.z), t);

FourthLine[i].gameObject.transform.position = Vector3.Lerp(FourthLine[i].transform.position, new Vector3(FourthLine[i].transform.position.x, 0f, FourthLine[i].transform.position.z), t);
FiftLine[i].gameObject.transform.position = Vector3.Lerp(FiftLine[i].transform.position, new Vector3(FiftLine[i].transform.position.x, 0f, FiftLine[i].transform.position.z), t);
SixthLine[i].gameObject.transform.position = Vector3.Lerp(SixthLine[i].transform.position, new Vector3(SixthLine[i].transform.position.x, 0f, SixthLine[i].transform.position.z), t);
SeventhLine[i].gameObject.transform.position = Vector3.Lerp(SeventhLine[i].transform.position, new Vector3(SeventhLine[i].transform.position.x, 0f, SeventhLine[i].transform.position.z), t);
EighthLine[i].gameObject.transform.position = Vector3.Lerp(EighthLine[i].transform.position, new Vector3(EighthLine[i].transform.position.x, 0f, EighthLine[i].transform.position.z), t);
NinthLine[i].gameObject.transform.position = Vector3.Lerp(NinthLine[i].transform.position, new Vector3(NinthLine[i].transform.position.x, 0f, NinthLine[i].transform.position.z), t);
TenthLine[i].gameObject.transform.position = Vector3.Lerp(TenthLine[i].transform.position, new Vector3(TenthLine[i].transform.position.x, 0f, TenthLine[i].transform.position.z), t);
}
}
}


This is the coroutine that chooses the random method:


private IEnumerator enumerator(float waitTime)
{
while (true)
{
RandomInt = Random.Range(1, 6);
if (RandomInt == 1)
{
LeftPartCalls = true;

LeftPartMat = true;
}
if (RandomInt == 2)
{
RightPartCalls = true;
RightPartMat = true;
}
if (RandomInt == 3)
{
VerticalStripeCalls = true;

VerticalStripeMat = true;
}
if (RandomInt == 4)
{
UpLeftDownRightCalls = true;
UpLeftDownRightMat = true;
}
if (RandomInt == 5)
{
UpRightDownLeftCalls = true;

UpRightDownLeftMat = true;
}
if (RandomInt == 6)
{
HorizontalStripeCalls = true;
HorizontalStripeMat = true;
}
if (RandomInt == 7)
{
CenterCalls = true;

CenterMat = true;
}
if (RandomInt == 8)
{
EdgeCalls = true;
EdgeMat = true;
}
if (RandomInt == 9)
{
SkewCalls = true;

SkewMat = true;
}
if (RandomInt == 10)
{
HorizontalWayCalls = true;
HorizontalWayMat = true;
}


yield return new WaitForSeconds(MoveT);

if (RandomInt == 1)
LeftPartFall = true;
if (RandomInt == 2)
RightPartFall = true;
if (RandomInt == 3)
VerticalStripeFall = true;
if (RandomInt == 4)
UpLeftDownRightFall = true;
if (RandomInt == 5)
UpRightDownLeftFall = true;

if (RandomInt == 6)
HorizontalStripeFall = true;
if (RandomInt == 7)
CenterFall = true;
if (RandomInt == 8)
EdgeFall = true;
if (RandomInt == 9)
SkewFall = true;
if (RandomInt == 10)
HorizontalWayFall = true;


t = 0f;
t += Time.deltaTime / 0.7f;

yield return new WaitForSeconds(3);
LeftPartFall = false;
LeftPartMat = false;
RightPartMat = false;
RightPartFall = false;
VerticalStripeMat = false;

VerticalStripeFall = false;
UpLeftDownRightFall = false;
UpLeftDownRightMat = false;
UpRightDownLeftFall = false;
UpRightDownLeftMat = false;
HorizontalStripeFall = false;
HorizontalStripeMat = false;
CenterFall = false;
CenterMat = false;
EdgeFall = false;

EdgeMat = false;
SkewFall = false;
SkewMat = false;
HorizontalWayFall = false;
HorizontalWayMat = false;
yield return new WaitForSeconds(4);
LeftPartCalls = false;
RightPartCalls = false;
VerticalStripeCalls = false;
UpLeftDownRightCalls = false;

UpRightDownLeftCalls = false;
HorizontalStripeCalls = false;
CenterCalls = false;
EdgeCalls = false;
SkewCalls = false;
HorizontalWayCalls = false;

if (MoveT >= 1.3f)
{
MoveT -= 0.7f;

}
if (MoveT <= 1.3f)
{
MoveT -= 0.6f;
}
if (MoveT <= 0.7f)
{
MoveT = 0.7f;
}
yield return new WaitForSeconds(MoveT);

}
}

Answer



It still doesn't look like we have enough information here to identify why your game behaves differently on Android, but I'd like to recommend we start with a clean slate, to hopefully make the code simpler to understand and more concise, with less room for bugs to creep in. I'll build this up in pieces so it's easier to follow:


First, let's generate your plane of cubes by script, and store it in a 2D array so we don't need to juggle ten different variables to store all the lines:


using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class FallingFloor : MonoBehaviour {

public Vector2Int size = new Vector2Int(10, 10);
public float spacing = 1f;
public MeshRenderer tilePrefab;

MeshRenderer[,] tiles;

void CreateFloor() {
// Define our tiles array in the appropriate size.
tiles = new MeshRenderer[size.x, size.y];

// Spawn the floor centered around this object's position.
Vector3 origin = transform.position + new Vector3(size.x, 0, size.y) * -0.5f * spacing;

// For each row, and each column, instantiate a tile.

for(int x = 0; x < size.x; x++) {
for(int y = 0; y < size.y; y++) {
tiles[x, y] = Instantiate(
tilePrefab,
new Vector3(x, 0, y) * spacing + origin,
Quaternion.identity
);
}
}
}


// ...

With all the tiles in one variable, it's easy to define functions that select different patterns of tiles. Let's follow the convention that our tile selection functions will populate a list with the desired tiles:


void SelectLeftHalf(List<MeshRenderer> pattern) {
pattern.Clear();
for(int x = 0; x < size.x/2; x++) {
for(int y = 0; y < size.y; y++) {
pattern.Add(tiles[x, y]);
}

}
}

void SelectEverySecondRow(List<MeshRenderer> pattern) {
pattern.Clear();
for(int x = 0; x < size.x; x++) {
for(int y = 0; y < size.y; y += 2) {
pattern.Add(tiles[x, y]);
}
}

}

void SelectCross(List<MeshRenderer> pattern) {
pattern.Clear();
// Make sure we don't go out of bounds on non-square maps.
int limit = Mathf.Min(size.x, size.y);
for(int x = 0; x < limit; x++) {
pattern.Add(tiles[x, x]);
int y = size.y - 1 - x;
// Don't double-add the middle tile in the case of odd sizes.

if(x != y)
pattern.Add(tiles[x, y]);
}
}

// ...

Now we can simplify the falling and colour-changing animations to single functions each that act on a selected pattern of tiles:


void ChangeAllMaterials(List<MeshRenderer> pattern, Material material) {
foreach(var renderer in pattern)

renderer.sharedMaterial = material;
}

IEnumerator SlideAllBlocks(List<MeshRenderer> pattern, float startHeight, float endHeight, float duration) {
float progress = 0f;
while(progress < 1f) {
progress = Mathf.Clamp01(progress + Time.deltaTime/duration);

// Compute a height to move to, with an ease-out curve.
float height = Mathf.Lerp(startHeight, endHeight, 1 - (1 - progress) * (1 - progress));


// Set all blocks in the pattern to this height.
foreach(var renderer in pattern) {
var position = renderer.transform.position;
position.y = height;
renderer.transform.position = position;
}

// Wait one frame, then resume.
yield return null;

}
}

// ...

And our master loop can then just select a pattern, call these functions on it in our desired sequence, and repeat:


public Material fallingMaterial;
public Material risingMaterial;
public float fallSeconds = 3f;
public float fallHeight = -4f;

public float riseSeconds = 4f;

IEnumerator AnimationLoop(float moveSeconds) {
// Prep our variable for tracking the pattern of tiles we're acting on.
// Since this is a local variable, we can control exactly who gets to act on it,
// so anything that changes the pattern should be easy to track down.
var pattern = new List<MeshRenderer>();

while(true) {
// Each cycle, select a randomly-chosen pattern of blocks.

int selection = Random.Range(1, 6);

switch(selection) {
case 1 : SelectLeftHalf(pattern); break;
case 2 : SelectRightHalf(pattern); break;
case 3 : SelectEverySecondRow(pattern); break;
case 4 : SelectEverySecondColumn(pattern); break;
case 5 : SelectCross(pattern); break;
}


// Change the material of all blocks in this pattern.
ChangeAllMaterials(pattern, fallingMaterial);

// Wait before we start to fall.
yield return new WaitForSeconds(moveSeconds);

// Chain control to our sliding method until the fall is complete.
yield return SlideAllBlocks(pattern, 0, fallHeight, fallSeconds);

// Done falling. Reset the materials and rise back up.

ChangeAllMaterials(pattern, risingMaterial);

// Chain control to our sliding method until the rise is complete.
yield return SlideAllBlocks(pattern, fallHeight, 0f, riseSeconds);

// Everything has risen and reset. Now adjust our moveSeconds for next cycle:
if (moveSeconds >= 1.3f) {
moveSeconds -= 0.7f;
} else {
moveSeconds = Mathf.Max(moveSeconds - 0.6f, 0.7f);

}

// Wait before starting the next cycle & choosing a new pattern.
yield return new WaitForSeconds(moveSeconds);
}
}

Now we don't need to rely on a small army of bool variables to coordinate all our actions. You should find that an approach like this makes your game behave more predictably, and makes it less laborious to add or change behaviour.

