Tuesday, April 30, 2019

c# - Is a custom coordinate system possible in Unity


Is it possible to create a custom coordinate system (e.g. one using doubles for coordinates, or one dividing the world into 'chunks' of a safe size) that is not constrained by Unity's floating-point precision limitations, and somehow make the systems that use the built-in coordinate system use it instead?


Thanks!



Answer



If what you want is to bypass the floating-point inaccuracy caused by single-precision coordinates, for the sake of creating bigger environments for your game, then it depends on what you are willing to accept as a solution.


Let's start by making this clear: it is impossible to alter the coordinate system at the inner core of Unity. So you can't inherently use double precision for coordinates.



1) Depending on your needs, however, custom classes in double precision might be of help. Of course, values still have to be converted back to float so Unity's systems can work with them, which is not a full solution, but in many cases it helps, even if only for implementing the other solutions below. By the way, someone has already implemented that for Unity: a Vector3d, i.e. a Vector3 with double precision: https://github.com/sldsmkd/vector3d


And for a discussion on that piece of code, please see: http://forum.unity3d.com/threads/double-precision-vector3.190713/
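The gist of such a class can be sketched in a few lines. This is a hedged illustration only (the names and methods here are hypothetical, not the linked project's API): keep positions in doubles, and only drop to float, relative to some nearby origin, when handing values to the engine.

```java
// Minimal sketch of a double-precision vector that converts back to
// single precision at the engine boundary. All names are hypothetical.
public class Vector3d {
    public final double x, y, z;

    public Vector3d(double x, double y, double z) {
        this.x = x; this.y = y; this.z = z;
    }

    public Vector3d add(Vector3d o) {
        return new Vector3d(x + o.x, y + o.y, z + o.z);
    }

    // Convert to single precision relative to some origin, so the
    // floats handed to the engine stay small and keep their precision.
    public float[] toFloatRelativeTo(Vector3d origin) {
        return new float[] {
            (float) (x - origin.x),
            (float) (y - origin.y),
            (float) (z - origin.z)
        };
    }
}
```

The key point is the relative conversion: subtracting a nearby origin in double precision first means the resulting floats are small numbers, where single precision is still accurate.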


2) As for the second part of your question: yes, it is possible to divide the world into areas to bypass the problem and allow big environments. But that still uses Unity's usual single-precision coordinate system. That trick was first implemented in the game Dungeon Siege, and the developer behind it wrote a paper on it:


http://scottbilas.com/files/2003/gdc_san_jose/continuous_world_paper.pdf


He also gave lectures about it; the slides are here:


http://scottbilas.com/files/2003/gdc_san_jose/continuous_world_slides.pdf


There is a video from a couple of years ago, where people from Unity comment on that solution and even explain a modern implementation of the concept in Unity:


https://www.youtube.com/watch?v=VKWvAuTGVrQ


Also, there was already a similar question on Unity Answers, with a user answer that might also enlighten you on the small part related to coordinate conversion:


http://answers.unity3d.com/questions/355721/custom-world-coordinate-system-changing-spaceworld-1.html



3) Also, see this blog entry, which has very interesting material on the problem and an idea on using multiple cameras to handle it:


http://www.davenewson.com/dev/unity-notes-on-rendering-the-big-and-the-small


4) Lastly, there is also the now-famous solution implemented by Kerbal Space Program, a game made with Unity. It is not a new solution and is sometimes referred to as the "Futurama method"; it is related to a broader family of solutions called "floating origin".


First of all, see this general discussion on your problem with a related idea using offsets from the player position:


http://www.udellgames.com/posts/size-matters-and-precision-too/
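The offset idea boils down to periodically recentering the world on the player. A hedged, engine-agnostic sketch (all names and the threshold are made up for illustration): once the player drifts far enough from the origin, shift every object back by the player's position.

```java
import java.util.List;

// Hypothetical minimal floating-origin recentering, independent of any engine.
public class FloatingOrigin {
    static final float THRESHOLD = 1000f; // recenter once the player is this far out

    // positions.get(0) is the player; each position is {x, y, z}.
    public static void recenter(List<float[]> positions) {
        float[] player = positions.get(0);
        float dist = (float) Math.sqrt(
            player[0] * player[0] + player[1] * player[1] + player[2] * player[2]);
        if (dist < THRESHOLD) return;

        // Shift the whole world so the player lands back at the origin.
        float ox = player[0], oy = player[1], oz = player[2];
        for (float[] p : positions) {
            p[0] -= ox; p[1] -= oy; p[2] -= oz;
        }
    }
}
```

Because everything near the camera now has small coordinates, single precision stays accurate no matter how far the player has traveled in total.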


Specifically on Kerbal, you can see:


http://forum.kerbalspaceprogram.com/entries/54-Scaled-Space-Now-with-100-more-Floating-Origin


And also: https://www.youtube.com/watch?v=mXTxQko-JH0


For a good (although non-professional) tutorial on how to implement it:


https://www.youtube.com/watch?v=VdkkTHV_5H8



And for a script implementing Floating Origin in Unity, see:


http://wiki.unity3d.com/index.php/Floating_Origin




Lastly, a definitive must-read is the following paper, which proposed the floating origin years ago and even describes and compares it to other solutions, such as dividing space into chunks with their own coordinates:


http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.471.7201&rep=rep1&type=pdf


xna - In Monogame, why is multiple tile drawing slow when rendering in "windowed fullscreen"?


I have this drawing function (recommended as a solution here). It draws tiles across the whole window with no problem, but my game slows down to ~30fps after maximizing it to "windowed fullscreen", which is 1600x1200 on my PC. (So 1600/16 * 1200/16 = 7500 tiles in the window.)


While I want to add more advanced stuff to this simple rendering, I wonder if it is possible to draw these tiles in an instanced way in MonoGame (like in OpenGL 3.3)? Or is there any other way to speed it up?


BTW, lowering the number of tiles is not an option, and I need to update the whole screen every frame because of an upcoming transparent GUI.


protected override void Draw(GameTime gameTime)
{
    frameCounter++;

    GraphicsDevice.Clear(Color.CornflowerBlue);

    spriteBatch.Begin();

    for (int x = 0; x < Window.ClientBounds.Width / 16; x++)
    {
        for (int y = 0; y < Window.ClientBounds.Height / 16; y++)
        {
            int offX = x + mainCamera.Position.X;
            int offY = y + mainCamera.Position.Y;

            Tile t = gameMap.GroundTiles[offX, offY, mainCamera.Position.Z];
            spriteBatch.Draw(t.Texture, new Vector2(x * 16, y * 16));
        }
    }

    spriteBatch.End();

    base.Draw(gameTime);
}





Answer



I suspect your issue lies in the way a sprite batch works in MonoGame. The performance cost is coming from using different textures for each tile.


Let's take a peek into the MonoGame source code and see what's going on. If you follow the code down through SpriteBatch.End you eventually end up in the SpriteBatcher.cs class around about here:


https://github.com/mono/MonoGame/blob/develop/MonoGame.Framework/Graphics/SpriteBatcher.cs#L243


        foreach (SpriteBatchItem item in _batchItemList)
        {
            // if the texture changed, we need to flush and bind the new texture
            if (item.TextureID != texID)
            {
                FlushVertexArray(startIndex, index);
                startIndex = index;
                texID = item.TextureID;
                GL.BindTexture(All.Texture2D, texID);
            }
            // store the SpriteBatchItem data in our vertexArray
            _vertexArray[index++] = item.vertexTL;
            _vertexArray[index++] = item.vertexTR;
            _vertexArray[index++] = item.vertexBL;
            _vertexArray[index++] = item.vertexBR;

            _freeBatchItemQueue.Enqueue(item);
        }
        // flush the remaining vertexArray data
        FlushVertexArray(startIndex, index);

As you can see from this code, the way it batches things up internally is by checking the TextureID of each item in the _batchItemList and if it has changed, flushes the vertex array and binds a new texture.


This approach is pretty typical of how sprite batching works in most 2D engines (from my understanding) and usually turns out okay. However, if you have too many texture switches it can be pretty costly on performance.


The usual way to deal with this in a tile based engine is to use a texture atlas. That is, a single texture that stores all your tiles and during rendering you pick the tile's "rectangle" from the source texture.



So it's okay to have a few different textures in your game, but try to keep texture switching to a minimum. For example, one texture per layer in a tile-based game should be fine (assuming you draw each layer one at a time), but for tiles within a single layer, try to store them all on a single texture.
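To make the atlas idea concrete, here's a hedged sketch (in Java for neutrality; the 16-pixel tile size and left-to-right, top-to-bottom packing are assumptions) of turning a tile index into a source rectangle within the atlas:

```java
import java.awt.Rectangle;

// Hypothetical atlas lookup: same-size tiles packed left-to-right,
// top-to-bottom in a single texture.
public class TileAtlas {
    static final int TILE_SIZE = 16;

    // Source rectangle for tile index `id` in an atlas `atlasWidth` pixels wide.
    public static Rectangle sourceRect(int id, int atlasWidth) {
        int tilesPerRow = atlasWidth / TILE_SIZE;
        int col = id % tilesPerRow;
        int row = id / tilesPerRow;
        return new Rectangle(col * TILE_SIZE, row * TILE_SIZE, TILE_SIZE, TILE_SIZE);
    }
}
```

The draw call then samples that rectangle from the one atlas texture, so consecutive sprites share a TextureID and the batcher never has to flush and rebind mid-layer.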


performance - Why do we use physics engines for collision testing or raycasting?


There is one thing I don't understand about game engines: why is it so common to use physics engines for raycasting or collision testing?


Say you have a 3D scene loaded in your scene manager (Ogre or whatever) and you have to do some raycasting. Is it really efficient to load all your objects into the physics or collision world just to test them for intersection? Why don't game engines implement the same structures, so you don't need to load a physics engine? This sounds like a waste of resources, especially if you don't need physics.



Answer





Is it really efficient to load all your objects in the physical or collision world just to test them for intersection?



Yes, it is efficient... for the programmer. There are lots of physics engines around, and it's far easier to just use one than to strip one down or implement raycasting yourself.


Talking about code (in fairly rough terms):


You've got the code to create physics objects (you're not going to do collision detection or raycasting against fully triangulated models, are you?) in a data structure that is efficient for those queries (you're not going to brute-force it, are you?). Then there is some wasted code for various physical properties (depending on the physics engine, you can probably avoid much of this), plus the CPU and memory requirements for the same (relatively small).


Then you've got the raycasting code. You need that in any case.


And finally, the rest of the physics simulation code, which just sits in its binaries: never executed, never loaded into cache. Code doesn't take up much space anyway.


So the "wastage" here is utterly minuscule. Especially compared to the massive effort involved in replacing or stripping down half a physics engine.
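To give a feel for how small the "raycasting code you need in any case" really is, here is a hedged sketch of the classic slab test for a ray against an axis-aligned box (2D for brevity, in Java; not taken from any particular engine):

```java
// Slab-method ray vs. axis-aligned-box test (2D here for brevity).
// Returns true if origin + t*dir, for some t >= 0, hits the box [min, max].
public class Raycast {
    public static boolean rayVsAabb(float ox, float oy, float dx, float dy,
                                    float minX, float minY, float maxX, float maxY) {
        float tmin = 0f, tmax = Float.POSITIVE_INFINITY;
        float[] o = {ox, oy}, d = {dx, dy};
        float[] lo = {minX, minY}, hi = {maxX, maxY};
        for (int axis = 0; axis < 2; axis++) {
            if (d[axis] == 0f) {
                // Ray is parallel to this slab: hit only if origin lies inside it.
                if (o[axis] < lo[axis] || o[axis] > hi[axis]) return false;
            } else {
                // Intersect the ray with both slab planes and shrink [tmin, tmax].
                float t1 = (lo[axis] - o[axis]) / d[axis];
                float t2 = (hi[axis] - o[axis]) / d[axis];
                tmin = Math.max(tmin, Math.min(t1, t2));
                tmax = Math.min(tmax, Math.max(t1, t2));
            }
        }
        return tmin <= tmax;
    }
}
```

The expensive part of a physics engine's raycast is not this test but the broadphase structure (BVH, grid, etc.) that avoids running it against every object, and that is exactly the part you'd least want to rewrite yourself.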


Game architecture for someone with a background in LOB Apps


I've got a background that is almost entirely based around business applications - Web services, schedulers, desktop and web front-ends to CRM systems, etc...



Now with almost all of the above projects, the basic principles are the same:


Some sort of data access layer, business logic layer and a UI.


Obviously some scenarios require something a little unique, but in general it's N-tier all the way.


I'd like to do some game development as a hobby. I'm not expecting anything impressive as I don't have the resources to dedicate to it, but something to challenge me a bit would be good.


What lessons (if any) should I be taking from my current experience and what do I need to learn again?


I'm assuming that, as with all my experience, different types of games will have different architectures, but are they all based around the same core principles? For the sake of argument, let's say I'm building a simple MUD (maybe with a top-down UI like the older Zelda games). This seemed like something I could apply my 3-tier logic to: a server with the BLL and DAL, and a client UI. But I'm not quite sure this is right. Certainly, using the Entity Framework doesn't seem appropriate, as there's an awful lot of overhead in accessing lots of stuff in the DB, and I'd imagine performance would be an issue; e.g., I'm assuming I wouldn't want to constantly use the DB to store player locations if they're changing 20+ times per second...


Are there patterns and practices specifically for game scenarios?


Is it feasible to develop the back-end system before creating a UI (e.g. plugging in a console app instead, to allow me to develop the functionality I'd like before adding the UI)? Is this good/bad practice?


In short, I don't know where to start and would appreciate some advice - especially from those with experience.


About the only thing that's set in stone is that I'd like a multi-user game with a central server. Game suggestions welcome.



[This was originally asked on SO but it was suggested that this would be a better forum. Apologies if this breaks any rules but I don't know how to link across sites. If someone can advise me of the appropriate action, I'll take it. Many thanks]



Answer



I don't think there really is a way to answer your questions. Personally I think it's too broad a topic to really talk about.



What lessons (if any) should I be taking from my current experience and what do I need to learn again?



Well, if you can think like a programmer and have learned to problem-solve, then great. Maybe you'll have an understanding of what makes a good end-user UI. Database knowledge might be a good foundation for... something... The rest of it (other than probably the web communication layer) you can throw away. Real-time game programming is almost totally different from application programming. There might be a little convergence when you start throwing multiplayer and server communication into the mix, but that's a significantly harder problem than a local-only game.



Are there patterns and practices specifically for game scenarios?




Yes. But most of them are subjective.


There is a huge gamut of patterns used in making games. Some relate to 3D graphics (and all its various subfields like lighting, filtering, shaders, etc.). Some are for features like streaming in level content. There are lots of books on collision paradigms. There's a battle between OO and data-driven design, and between components and inheritance hierarchies. People can't even agree on the best way to pump updates to entities in a world (fixed rate or per-frame deltas, for one part of the argument). People have different opinions about whether generic programming is a good thing. There's no way to really answer the question as stated (other than "yes"); it's too broad.




I mean, if you have specific questions on how to do certain things, that is what this site is designed for. For example, maybe you want to know about how to sync up client side movement with a server and have it not be choppy. Maybe you want to ask if the client should be authoritative on movement or the server should be.


My advice to new people starting games is just to do it. Nobody can help you make your decisions. You can be spinning your wheels forever trying to do research on the "best way" to do things. But that really isn't important. What's important is building up experience on your own in order to get to the point where when you need to make a decision you can make an informed decision, and to keep an analytical mind open when you're working on things to see what problems and solutions crop up.


For your example, talking about pushing updates to a database. Just do it. Then profile it. If it's slow, think about ways around it. If it isn't fast enough, start thinking about ways to handle it. If you need direction, then come to us for help.


Sunday, April 28, 2019

architecture - How to structure game states in an entity/component-based system


I'm making a game designed with the entity-component paradigm that uses systems to communicate between components as explained here. I've reached the point in my development that I need to add game states (such as paused, playing, level start, round start, game over, etc.), but I'm not sure how to do it with my framework. I've looked at this code example on game states which everyone seems to reference, but I don't think it fits with my framework. It seems to have each state handling its own drawing and updating. My framework has a SystemManager that handles all the updating using systems. For example, here's my RenderingSystem class:


public class RenderingSystem extends GameSystem {

    private GameView gameView_;

    /**
     * Constructor
     * Creates a new RenderingSystem.
     * @param gameManager The game manager. Used to get the game components.
     */
    public RenderingSystem(GameManager gameManager) {
        super(gameManager);
    }

    /**
     * Method: registerGameView
     * Registers gameView into the RenderingSystem.
     * @param gameView The game view registered.
     */
    public void registerGameView(GameView gameView) {
        gameView_ = gameView;
    }

    /**
     * Method: triggerRender
     * Adds a repaint call to the event queue for the dirty rectangle.
     */
    public void triggerRender() {
        Rectangle dirtyRect = new Rectangle();

        for (GameObject object : getRenderableObjects()) {
            GraphicsComponent graphicsComponent =
                object.getComponent(GraphicsComponent.class);
            dirtyRect.add(graphicsComponent.getDirtyRect());
        }

        gameView_.repaint(dirtyRect);
    }

    /**
     * Method: renderGameView
     * Renders the game objects onto the game view.
     * @param g The graphics object that draws the game objects.
     */
    public void renderGameView(Graphics g) {
        for (GameObject object : getRenderableObjects()) {
            GraphicsComponent graphicsComponent =
                object.getComponent(GraphicsComponent.class);

            if (!graphicsComponent.isVisible()) continue;

            GraphicsComponent.Shape shape = graphicsComponent.getShape();
            BoundsComponent boundsComponent =
                object.getComponent(BoundsComponent.class);
            Rectangle bounds = boundsComponent.getBounds();

            g.setColor(graphicsComponent.getColor());

            if (shape == GraphicsComponent.Shape.RECTANGULAR) {
                g.fill3DRect(bounds.x, bounds.y, bounds.width, bounds.height,
                    true);
            } else if (shape == GraphicsComponent.Shape.CIRCULAR) {
                g.fillOval(bounds.x, bounds.y, bounds.width, bounds.height);
            }
        }
    }

    /**
     * Method: getRenderableObjects
     * @return The renderable game objects.
     */
    private HashSet<GameObject> getRenderableObjects() {
        return gameManager.getGameObjectManager().getRelevantObjects(
            getClass());
    }

}

Also, all the updating in my game is event-driven; I don't have a loop like theirs that simply updates everything at the same time.



I like my framework because it makes it easy to add new GameObjects, and it doesn't have the problems some component-based designs encounter when communicating between components. I would hate to chuck it just to get pause to work. Is there a way I can add game states to my game without abandoning the entity-component design? Does the game state example actually fit my framework, and am I just missing something?


EDIT: I might not have explained my framework well enough. My components are just data. If I was coding in C++, they'd probably be structs. Here's an example of one:


public class BoundsComponent implements GameComponent {

    /**
     * The position of the game object.
     */
    private Point pos_;

    /**
     * The size of the game object.
     */
    private Dimension size_;

    /**
     * Constructor
     * Creates a new BoundsComponent for a game object with initial position
     * initialPos and initial size initialSize. The position and size combine
     * to make up the bounds.
     * @param initialPos The initial position of the game object.
     * @param initialSize The initial size of the game object.
     */
    public BoundsComponent(Point initialPos, Dimension initialSize) {
        pos_ = initialPos;
        size_ = initialSize;
    }

    /**
     * Method: getBounds
     * @return The bounds of the game object.
     */
    public Rectangle getBounds() {
        return new Rectangle(pos_, size_);
    }

    /**
     * Method: setPos
     * Sets the position of the game object to newPos.
     * @param newPos The value to which the position of the game object is
     * set.
     */
    public void setPos(Point newPos) {
        pos_ = newPos;
    }

}

My components do not communicate with each other; systems handle inter-component communication. My systems also do not communicate with each other: they have separate functionality and can easily be kept separate. The MovementSystem doesn't need to know what the RenderingSystem is rendering to move the game objects correctly; it just needs to set the right values on the components, so that when the RenderingSystem renders the game objects, it has accurate data.


The game state could not be a system, because it needs to interact with the systems rather than with the components. It's not setting data; it's determining which functions need to be called.


A GameStateComponent wouldn't make sense, because all the game objects share one game state. Components are what make up objects, and each one is different for each object. For example, two game objects cannot have the same bounds. They can have overlapping bounds, but if they share a BoundsComponent, they're really the same object. Hopefully, this explanation makes my framework less confusing.




Answer



I'll admit that I didn't read the link you posted. After your edit, and reading the link provided, my position has changed. The below reflects this.




I don't know that you need to worry about game states in the traditional sense. Considering your approach to development, each system is so specific that, in effect, the systems themselves are the game's state management.


In an entity system, the components are just data, right? So is a state. In its simplest form, it's just a flag. If you build your states into components, and allow your systems to consume those components' data and react to the states (flags) within them, you will be building your state management into each system itself.
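Concretely, that could be as small as a single shared component holding a flag, which each system consults at the top of its update (a hypothetical sketch in the same just-data style as your BoundsComponent; one instance is shared rather than attached per object):

```java
// Hypothetical game-state component: just data, like BoundsComponent.
// A single instance is shared by all systems rather than attached per object.
public class GameStateComponent {

    public enum State { PLAYING, PAUSED, GAME_OVER }

    private State state_ = State.PLAYING;

    public State getState() {
        return state_;
    }

    public void setState(State newState) {
        state_ = newState;
    }

    // Each system reacts to the flag rather than being told what to do,
    // e.g. at the top of a hypothetical MovementSystem update:
    //
    //     if (gameState.getState() != GameStateComponent.State.PLAYING) return;
    //
    // while the RenderingSystem might keep repainting even when paused.
}
```

This keeps the logic in the systems and the data in the components, so pausing is just a flag flip, not a restructuring of the framework.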


It seems management systems such as the AppHub example do not apply very well to your development paradigm. Creating a super-system that encapsulates other systems seems to defeat the purpose of separating logic from data.


This might help you to understand what I mean about not having to explicitly handle game states:


http://paulgestwicki.blogspot.com/2012/03/components-and-systems-of-morgans-raid.html


rotation - How to keep my Quaternion-using FPS camera from tilting and messing up?


I am using an FPS-like camera, and it uses quaternions. But, whenever I try looking up and then sideways, it tilts, and sometimes it can turn upside down. How can I fix this?




pronouns - Can 'all' be used as a predicative complement?



"But what are you going to do with it [= dragon’s egg] when it's hatched?" said Hermione.
"Well, I've bin doin' some readin', said Hagrid, pulling a large book from under his pillow. "Got this outta the library –– Dragon Breeding for Pleasure and Profit –– it's a bit outta date, o' course, but it's all in here. Keep the egg in the fire, 'cause their mothers breathe on ‘em, see, an' when it hatches, feed it on a bucket o' brandy mixed with chicken blood every half hour. An' see here –– how ter recognize diff'rent eggs –– what I got there's a Norwegian Ridgeback. They're rare, them."
(Harry Potter and the Sorcerer's Stone)



It looks like 'all' is a predicative complement over 'it'. Yet I don't find such usage in dictionaries. How should I analyze the 'all'?



Answer



"It's all in here" means, approximately, "everything you need to know is contained in this work". The phrase is frequently used in response to a question or implicit question:




How do I install a grounded wall outlet? —(handing you a manual) It's all in here.
I can't find the 1927 commerce data. —(handing you a reference book) It's all in here.



As FumbleFingers very wittily and economically points out, whenever it heads a response to a question or implicit question it often refers to the answer:



Can 'all' be used as a predicative complement? —It's a mystery to me.



So the it Hagrid uses here is not co-referent with any previous it, but refers to the answer to the question which launched the discussion, "What are you going to do with it?"


As for all, McCawley 1998 has several pages (I'm sorry, I don't remember exactly where) on how all can be moved around. All may occur:




at the beginning, in the determiner position: All of it is here.
after the noun/pronoun it modifies, in an appositive-adjective position: It all is here.
after the verb, in an adverb position: It is all here.



But they all have the same meaning. Or all of them have the same meaning. Or they have all the same meaning.


Saturday, April 27, 2019

word usage - "is capable" vs. "has capability"




  1. Science is capable of wonderful things - but not always, and rarely as quickly as we would wish.





  2. Science has capability of wonderful things - but not always, and rarely as quickly as we would wish.




  3. Science has capability of doing wonderful things - but not always, and rarely as quickly as we would wish.




  4. Science has capability of making wonderful things - but not always, and rarely as quickly as we would wish.





Are 2, 3 and 4 acceptable variations of 1, which is quoted from The New York Times? Or, in any case, is 1 the best way to word that sentence and the other versions are, at best, examples of sloppy phrasing?



Answer




1) Science is capable of wonderful things - but not always, and rarely as quickly as we would wish.



This is a perfectly valid construction. Science has the ability to do/create/etc. wonderful things.



2) Science has capability of wonderful things - but not always, and rarely as quickly as we would wish.




This is nonsense. Adding 'the' before 'capability', as others have mentioned, actually doesn't help; the sentence is still nonsense. "Science has the capability of wonderful things" doesn't make any sense, because there's no verb here; science has the capability of doing what to/with/for/etc. wonderful things? Creating them, making them, doing them, destroying them...what? This is not valid.



3) Science has capability of doing wonderful things - but not always, and rarely as quickly as we would wish.



Here you've added the verb 'doing', which is great. We do need 'the' here, though. I'd also mention that, while in the first example science is capable of something, here it must have the capability to do something. The correct version would be:



Science has the capability to do wonderful things - but not always, and rarely as quickly as we would wish.



This has identical meaning to the original sentence #1, just with a lot more words. To be capable of something is the same thing as having the capability to do something.




4) Science has capability of making wonderful things - but not always, and rarely as quickly as we would wish.



Let's fix this one up similarly to sentence #3, and for the same reasons:



Science has the capability to make wonderful things - but not always, and rarely as quickly as we would wish.



This is similar in meaning to the first sentence and corrected third sentence, except it uses make where the third sentence uses do (and the first sentence leaves out the verb entirely).


You can always add the verb to the first sentence, by the way; any of these are valid:



Science is capable of (doing/making/creating/etc.) wonderful things - but not always, and rarely as quickly as we would wish.




To answer your final question, a variant of the first sentence is most likely your best choice. But you can modify it with any verb to add the specificity it seems you were trying to add in sentences three and four.


android - How to handle pixel-perfect collision detection with rotation?


Does anyone have any ideas how to go about achieving rotational pixel-perfect collision detection with Bitmaps in Android? Or in general for that matter? I have pixel arrays currently but I don't know how to manipulate them based on an arbitrary number of degrees.



Answer



I'm not familiar with Android, so I don't know what tools you have at your disposal, but I can tell you how to implement this in general terms. How easy it will be depends on what Android provides for you. You're going to need matrices, or at the very least they will simplify the calculations a lot.


For starters, do a bounding-box collision check and return immediately if the boxes don't collide, to avoid further computation. That's logical: if the bounding boxes don't collide, it's guaranteed that no pixels collide either.


Afterwards, if a pixel perfect collision check is needed, then the most important point is that you have to perform that check in the same space. This can be done by taking each pixel from sprite A, applying a series of transformations in order to get them into sprite B's local space, and then check if it collides with any pixel in that position on sprite B. A collision happens when both pixels checked are opaque.


So, the first thing you need is to construct a world matrix for each of the sprites. There are probably tutorials online teaching you how to create one, but it should basically be a concatenation of a few simpler matrices in the following order:


Translation(-Origin) * Scale * Rotation * Translation(Position)


The utility of this matrix is that multiplying a point in local space (for instance, if you get pixels using a method like bitmap.getPixelAt(10,20), then (10,20) is defined in local space) by the corresponding world matrix moves it into world space:


LocalA * WorldMatrixA -> World
LocalB * WorldMatrixB -> World

And if you invert the matrices you can also go in the opposite direction i.e. transform points from world space into each of the sprite's local spaces depending on which matrix you used:


World * InverseWorldMatrixA -> LocalA
World * InverseWorldMatrixB -> LocalB

So in order to move a point from sprite A's local space into sprite B's local space, you first transform it using sprite A's world matrix, in order to get it into world space, and then using sprite B's inverse world matrix, to get it into sprite B's local space:



LocalA * WorldMatrixA -> World * InverseWorldMatrixB -> LocalB

After the transformation, you check if the new point falls within sprite B's bounds, and if it does, you check the pixel at that location just like you did for sprite A. So the entire process becomes something like this (in pseudocode and untested):


bool PixelCollision(Sprite a, Sprite b)
{
    // Go over each pixel in A
    for (int i = 0; i < a.Width; i++)
    {
        for (int j = 0; j < a.Height; j++)
        {
            // Check if pixel is solid in sprite A
            bool solidA = a.getPixelAt(i,j).Alpha > 0;

            // Transparent pixels in A can never collide, so skip them early
            if (!solidA) continue;

            // Calculate where that pixel lies within sprite B's bounds
            Vector3 positionB = new Vector3(i,j,0) * a.WorldMatrix * b.InverseWorldMatrix;

            // If it's outside bounds skip to the next pixel
            if (positionB.X < 0 || positionB.Y < 0 ||
                positionB.X >= b.Width || positionB.Y >= b.Height) continue;

            // Check if pixel is solid in sprite B
            bool solidB = b.getPixelAt(positionB.X, positionB.Y).Alpha > 0;

            // If both are solid then report collision
            if (solidA && solidB) return true;
        }
    }
    return false;
}
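In 2D, the matrix plumbing described above can be done with the JDK's java.awt.geom.AffineTransform. A hedged sketch (all values hypothetical; note that AffineTransform applies the most recently concatenated transform to the point first, so the calls below read right-to-left relative to the formula given earlier):

```java
import java.awt.geom.AffineTransform;
import java.awt.geom.NoninvertibleTransformException;
import java.awt.geom.Point2D;

// 2D sketch of the world matrix and the localA -> world -> localB mapping.
public class SpriteSpaces {

    // Translation(-Origin) * Scale * Rotation * Translation(Position),
    // built in reverse call order because AffineTransform applies the
    // last-concatenated transform to a point first.
    public static AffineTransform worldMatrix(double originX, double originY,
                                              double scaleX, double scaleY,
                                              double rotation,
                                              double posX, double posY) {
        AffineTransform world = new AffineTransform();
        world.translate(posX, posY);          // Translation(Position)
        world.rotate(rotation);               // Rotation
        world.scale(scaleX, scaleY);          // Scale
        world.translate(-originX, -originY);  // Translation(-Origin)
        return world;
    }

    // Map a pixel position from sprite A's local space into sprite B's.
    public static Point2D localAToLocalB(Point2D p, AffineTransform worldA,
                                         AffineTransform worldB)
            throws NoninvertibleTransformException {
        Point2D world = worldA.transform(p, null);       // localA -> world
        return worldB.createInverse().transform(world, null); // world -> localB
    }
}
```

In a real loop you would compute worldB.createInverse() once outside the per-pixel loop rather than per call, since inverting the matrix for every pixel would dominate the cost.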

Does Unity let you code in Java?




I am fairly new to Unity3D, but I have very good knowledge of Java and Android development. I am really confused about whether Java is needed at all for developing Android applications in Unity. I read somewhere in the Unity documentation that adding behaviour to objects in Unity requires the use of scripts, and that Unity only supports C#, .NET, and Boo scripts. Is there no use for Java at all?



Answer



Java is not supported by Unity. You should check out C#, however; it's a very similar language that takes a lot of influence from Java while arguably smoothing out some of its rougher edges. I originally noted that you would need both Unity Pro and Unity Android Pro to create Android games using Unity, but as jhocking and ashes999 note in the comments, you don't need Unity Pro or Unity Android Pro to release commercial Unity games on Android.


verbs - Does "punish" imply guilt?


In the phrase "They were hugely punished", am I implying that the punished subjects had done something wrong? If so, is there a word with the same meaning as punish (meaning damage done to someone, physically and/or psychologically) but without the implied guilt?



Answer



Punish has three forms: punish (verb), to inflict a punishment (noun) on a person; and punishing (adjective), which uses the known harshness of punishments to describe other (often unrelated) activities.



Punish
VERB [WITH OBJECT]
1. Inflict a penalty or sanction on (someone) as retribution for an offence, especially a transgression of a legal or moral code.




It doesn't merely imply guilt; it states guilt, as that is the meaning of punish.


There are hundreds of words that mean to hurt somebody, hurt being the first good example.


There is an informal usage of punishment:



Punishment
NOUN
[mass noun]
1.2. informal Rough treatment or handling.
‘your machine can take a fair amount of punishment before falling to bits’




or punishing



Punishing
ADJECTIVE
1. Physically and mentally demanding; arduous.


    ‘the band's punishing tour schedule’


  1.1 Severe and debilitating.


     'the recession was having a punishing effect on our business’



There is this definition for punish



1.4 Subject to severe and debilitating treatment.


BUT if you look at the example sentences, they are all using punishing as an adjective rather than punish as a verb.


All definitions are from the Oxford Dictionary.




"They were hugely punished"


isn't actually correct: you can't be hugely punished; the two just don't go together. Hugely is a size qualifier and punish is a verb; a punishment doesn't come in sizes, just in severities.


You could be severely punished, unfairly punished, or mildly punished, but NOT hugely punished. You could take a huge punishment, because that is a noun. A thing has a size; an action does not.


Friday, April 26, 2019

exporting bind and keyframe bone poses from blender to use in OpenGL


EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this.


Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script.


I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these.


The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based)



  • Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v.

  • Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix: a transform from j's local space to its parent space, which I'll denote Bj. If j were part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is, to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v.

  • Now further assume I have a set of frames, each with a second transform Cj, which works the same as Bj only for a different, arbitrary spatial configuration of joint j. Cj still takes vertices from j space to world space, but j is rotated and/or translated and/or scaled.



Given the above, in order to skin vertex v at keyframe n. I need to:



  1. take v from world space to joint j space

  2. modify j (while v stays fixed in j space and is thus taken along in the transformation)

  3. take v back from the modified j space to world space


So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here: I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple of textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiplies v' by M and not v. But I've tried changing this and it just screws things up in a different way, because there's something else wrong.



  • Finally, if we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But since skinning is defined as v' = Cj * Bj^-1 * v, Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v
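To sanity-check this math, here is a tiny standalone sketch (not the engine code; translation-only matrices, and every value is made up) of both the single-joint formula and the two-joint inverse reversal:

```python
# Pure-Python sanity check of v' = Cj * Bj^-1 * v (translation-only
# matrices, hypothetical values). Matrices are 4x4 row-major lists.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translation(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def inv_translation(m):
    # inverse of a pure translation: negate the translation column
    return translation(-m[0][3], -m[1][3], -m[2][3])

# Single joint j: bind pose Bj, animated pose Cj (joint moved up by 1)
Bj = translation(0, 1, 0)
Cj = translation(0, 2, 0)
v = [0, 1, 0, 1]  # vertex sitting exactly at the joint in bind pose
v_skinned = mat_vec(mat_mul(Cj, inv_translation(Bj)), v)
# the vertex follows the joint up by 1: v_skinned == [0, 2, 0, 1]

# Two-joint chain: the global bind composes parent-first, and its inverse
# is the reverse concatenation, matching v' = C0*C1 * B1^-1*B0^-1 * v
B0, B1 = translation(1, 0, 0), translation(0, 1, 0)
lhs = inv_translation(mat_mul(B0, B1))
rhs = mat_mul(inv_translation(B1), inv_translation(B0))
# lhs == rhs
```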



Now on to the implementation (Blender side):


Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton:


enter image description here


Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is:



  • keyframe 0: the joint is in bind / rest pose (the way you see it in the image).

  • keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise.

  • keyframe 59: the joint goes back to the same configuration it was in keyframe 0.


My first source of confusion on the Blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the Python API.



Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system:


# World transform: Blender -> OpenGL
worldTransform = Matrix().Identity(4)
worldTransform *= Matrix.Scale(-1, 4, (0,0,1))
worldTransform *= Matrix.Rotation(radians(90), 4, "X")

# Mesh (local) transform matrix
file.write('Mesh Transform:\n')
localTransform = mesh.matrix_local.copy()
localTransform = worldTransform * localTransform

for col in localTransform.col:
    file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3]))
file.write('\n')

So if you will, my "world" matrix is basically the act of changing Blender's coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of pre or post in terms of matrix multiplication order) the mesh matrix M, so that I don't need to multiply it again once per draw call in the engine.


About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following:




  • For joint bind poses:


    def DFSJointTraversal(file, skeleton, jointList):
        for joint in jointList:
            bindPoseJoint = skeleton.data.bones[joint.name]
            bindPoseTransform = bindPoseJoint.matrix_local.inverted()

            file.write('Joint ' + joint.name + ' Transform {\n')
            translationV = bindPoseTransform.to_translation()
            rotationQ = bindPoseTransform.to_3x3().to_quaternion()
            scaleV = bindPoseTransform.to_scale()
            file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
            file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
            file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))

            DFSJointTraversal(file, skeleton, joint.children)
            file.write('}\n')


Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same except that it's not homogeneous.





  • For joint current / keyframe poses:


    for kfIndex in keyframes:
        bpy.context.scene.frame_set(kfIndex)
        file.write('keyframe: {:d}\n'.format(int(kfIndex)))
        for i in range(0, len(skeleton.data.bones)):
            file.write('joint: {:d}\n'.format(i))

            currentPoseJoint = skeleton.pose.bones[i]
            currentPoseTransform = currentPoseJoint.matrix

            translationV = currentPoseTransform.to_translation()
            rotationQ = currentPoseTransform.to_3x3().to_quaternion()
            scaleV = currentPoseTransform.to_scale()
            file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2]))
            file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0]))
            file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2]))

        file.write('\n')



Note that here I go for skeleton.pose.bones instead of data.bones, and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. From the descriptions in the Python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case.


The implementation (Engine / OpenGL side):


My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity):


static double time = 0;
time = fmod((time + elapsedTime), 1.);
uint16_t LERPKeyframeNumber = 60 * time;
uint16_t lkeyframeNumber = 0;
uint16_t lkeyframeIndex = 0;
uint16_t rkeyframeNumber = 0;
uint16_t rkeyframeIndex = 0;

for (int i = 0; i < aClip.keyframesCount; i++) {
    uint16_t keyframeNumber = aClip.keyframes[i].number;
    if (keyframeNumber <= LERPKeyframeNumber) {
        lkeyframeIndex = i;
        lkeyframeNumber = keyframeNumber;
    }
    else {
        rkeyframeIndex = i;
        rkeyframeNumber = keyframeNumber;
        break;
    }
}

double lTime = lkeyframeNumber / 60.;
double rTime = rkeyframeNumber / 60.;
double blendFactor = (time - lTime) / (rTime - lTime);

GLKMatrix4 bindPosePalette[aSkeleton.jointsCount];
GLKMatrix4 currentPosePalette[aSkeleton.jointsCount];

for (int i = 0; i < aSkeleton.jointsCount; i++) {
    F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i];
    F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i];

    GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor);
    GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor);
    GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor);

    GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation);
    currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation);
    currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling);

    GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q);
    inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t);
    inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s);

    if (aSkeleton.joints[i].parentIndex == -1) {
        bindPosePalette[i] = inverseBindTransform;
        currentPosePalette[i] = currentTransform;
    }
    else {
        bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]);
        currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform);
    }

    aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]);
}


Finally, this is my vertex shader:


#version 100

uniform mat4 modelMatrix;
uniform mat3 normalMatrix;
uniform mat4 projectionMatrix;
uniform mat4 skinningPalette[6];
uniform lowp float skinningEnabled;



attribute vec4 position;
attribute vec3 normal;
attribute vec2 tCoordinates;
attribute vec4 jointsWeights;
attribute vec4 jointsIndices;

varying highp vec2 tCoordinatesVarying;
varying highp float lIntensity;

void main()
{
    tCoordinatesVarying = tCoordinates;

    vec4 skinnedVertexPosition = vec4(0.);
    for (int i = 0; i < 4; i++) {
        skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position;
    }

    vec4 skinnedNormal = vec4(0.);
    for (int i = 0; i < 4; i++) {
        skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.);
    }

    vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled);
    vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, skinningEnabled);

    vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz);
    vec3 lightPosition = vec3(0., 0., 2.);
    lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    gl_Position = projectionMatrix * modelMatrix * finalPosition;
}

The result is that the animation displays wrong in terms of orientation: instead of bobbing up and down, it bobs in and out (along what I think is the Z axis, according to the world transform in my export script). And the rotation angle is counterclockwise instead of clockwise.


If I try with more than one joint, then it's almost as if the second joint rotates in its own different coordinate space and does not follow 100% its parent's transform, which I assume it should, given my animation subsystem, which in turn follows the theory I explained for the case of more than one joint.


Any thoughts?




c# - How can I build a game in Unity with minimum/no use of the visual editor?


I'd like to write a game completely in C#. In my search for an engine, I found Unity3D, but all the tutorials and documentation are speaking about a visual editor and the Unity IDE in which you click and point around to create scenes and scripts.



I don't want to do that. I prefer full code coverage over designers abstracting things away from me. I'd like to only write pure C# code from scratch or if required the Unity scripts as an addition. I couldn't find any explanation or documentation about doing so; how can I build a game using Unity and making minimum (or no) use of the visual editor?



Answer



I am a complete beginner in Unity, but this is how I do it at the moment, and it reduces the editor usage to minimum:


In the editor, I only have three objects: an empty GameObject called "main", a camera, and a light. And this is only because so far I only work with a single camera and a single light, so it was faster this way. Later I will probably remove them, and only the "main" will remain.


In "Assets/MyScripts" I have a class "Main", which is added to the "main" GameObject as a behavior. This means that when the program starts, the "Main" class is instantiated and its methods are called. The "Main" class is like this:


using UnityEngine;

public class Main : MonoBehaviour
{


void Start ()
{
// initialize the game
}

void Update ()
{
// update physics
}


}

In the game I dynamically build the environment like this:


GameObject floor = GameObject.CreatePrimitive (PrimitiveType.Cube);
floor.renderer.material.color = RandomGreen ();

But this is because so far I am only making a prototype. Later I will want to replace the cubes with some nice things edited in Blender. That will require making them in Blender and importing them into Unity as "prefabs". Then, I will similarly instantiate the "prefabs" from the C# code.


If you want to make an object react to events, such as collisions with other objects, you can instantiate the object and dynamically add a behavior class to it, which is a C# class derived from MonoBehaviour. For example, if you want to have a car, you make a car prefab and write a "CarBehavior" class for its behavior in the game.


This way you can reduce the interaction with the editor to a minimum, although probably not completely to zero. Now it depends on whether this is an acceptable solution for you.


unity - Destroy chunked block like Minecraft


I am trying to make a Minecraft-like game (like everyone has already tried, I guess) and I ran into the problem I expected: lag with world generation. I did a lot of searching, but I couldn't find a solution.


I tried to combine them into a bigger mesh, which works, but now I can't destroy a single block. So how can I place/destroy blocks in a chunked (mesh-combined) world? If I can't, can someone clearly explain a solution to me (I am not fluent in English nor with programming expressions, but I can code) and how to do it (with a script example if possible :))? Thanks in advance!


I generated my world using 2D Perlin noise. Once I have the blocks, I split them into 16x16x100 chunk meshes.


Before making chunks, I first tried to combine all of them into one mesh (following the Unity manual), but the result is only one cube placed at (0,0,0). Where are all the other cubes?


using System.Collections;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

public class ThisWorldGen : MonoBehaviour {

    public GameObject blockPrefab;
    public GameObject grass;
    public GameObject dirt;
    public GameObject rock;
    private string[] blockTag;

    private GameObject stockin;

    public int amp;
    public int freq;
    private Vector3 mypos;
    [SerializeField]
    private Material material;
    private Vector3 blockLocations;

    public List<MeshFilter> meshes = new List<MeshFilter>();

    void Start()
    {
        Generate();
    }

    void Generate()
    {
        amp = Random.Range(0,100);
        freq = Random.Range(30,100);
        mypos = this.transform.position;
        int cols = 100;
        int rows = 100;
        float startTime = Time.realtimeSinceStartup;
        #region Create Mesh Data
        MeshFilter blockMeshes = Instantiate(blockPrefab, Vector3.zero, Quaternion.identity).GetComponent<MeshFilter>();

        for(int x = 0; x < cols; x++)
        {
            for(int z = 0; z < rows; z++)
            {
                float y = Mathf.PerlinNoise((mypos.x + x) / freq, (mypos.z + z) / freq) * amp;
                y = Mathf.Floor(y);
                for(float hy = y; y > 0.0F; y -= 1.0F)
                {
                    blockMeshes.transform.position = new Vector3(mypos.x + x, y, mypos.z + z); // move the unit cube to the intended position
                    meshes.Add(blockMeshes);
                }
            }
        }
        int i = 0;
        MeshFilter[] listMeshes = new MeshFilter[meshes.Count];
        foreach(MeshFilter cubesToAdd in meshes)
        {
            listMeshes[i] = cubesToAdd;
            i++;
        }
        CombineInstance[] combine = new CombineInstance[listMeshes.Length];
        int w = 0;

        while(w < listMeshes.Length)
        {
            combine[w].mesh = listMeshes[w].sharedMesh;
            combine[w].transform = listMeshes[w].transform.localToWorldMatrix;
            listMeshes[w].gameObject.SetActive(false);
            w++;
        }
        Debug.Log(combine.Length);
        transform.GetComponent<MeshFilter>().mesh = new Mesh();
        transform.GetComponent<MeshFilter>().mesh.CombineMeshes(combine);

        transform.gameObject.SetActive(true);
        #endregion
        Debug.Log("Loaded in " + (Time.realtimeSinceStartup - startTime) + " Seconds.");
    }
}

(the world was generating randomly and perfectly before I tried to merge the meshes)




Immediate GUI - yae or nay?



I've been working on application development with a lot of "retained" GUI systems (more below on what I mean by that) like MFC, QT, Forms, SWING and several web-GUI frameworks some years ago. I always found the concepts of most GUI systems overly complicated and clumsy. The amount of callback events, listeners, data copies, and something-to-string-to-something conversions (and so on) was always a source of mistakes and headaches compared to other parts of the application. (Even with "proper" use of data bindings/models.)


Now I am writing computer games :). I have worked with one GUI so far: Miyagi (not well known, but basically the same idea as all the other systems).


It was horrible.


For real-time rendering environments like games, I get the feeling that "retained" GUI systems are even more obsolete. User interfaces usually don't need to be auto-layouted or have on-the-fly resizable windows. Instead, they need to interact very efficiently with constantly changing data (like 3D positions of models in the world).


A couple of years ago, I stumbled upon "IMGUI", which is basically like an immediate graphics mode, but for user interfaces. I didn't give it too much attention, since I was still in application development and the IMGUI scene itself seemed neither really broad nor successful. Still, the approach it takes seems so utterly sexy and elegant that it made me want to write the UI for my next project this way (I failed to convince anyone at work :(...)


Let me summarize what I mean by "retained" and "immediate":


Retained GUI: In a separate initialization phase, you create "GUI controls" like labels, buttons, text boxes, etc., and use some descriptive (or programmatic) way of placing them on screen - all before anything is rendered. Controls hold most of their own state in memory: X,Y location, size, borders, child controls, label text, images and so on. You can add callbacks and listeners to get informed of events and to update data in the GUI control.


Immediate GUI: The GUI library consists of one-shot "RenderButton", "RenderLabel", "RenderTextBox"... functions (edit: don't get confused by the Render prefix. These functions also do the logic behind the controls, like polling user input, inserting characters, handling character-repeat speed when the user holds down a key, and so on...) that you can call to "immediately" render a control (it doesn't have to be immediately written to the GPU; usually it's remembered for the current frame and sorted into appropriate batches later). The library does not hold any "state" for these. If you want to hide a button... just don't call the RenderButton function. All RenderXXX functions that have user interaction, like buttons or checkboxes, have return values that indicate whether e.g. the user clicked the button. So your "RenderGUI" function looks like one big if/else function where you call or don't call your RenderXXX functions depending on your game state, and all the data-update logic (when a button is pressed) is intermingled into the flow. All data storage is "outside" the GUI and passed on demand to the render functions. (Of course, you would split the big function up into several ones, or use some class abstractions for grouping parts of the GUI. We don't write code like in 1980 anymore, do we? ;))
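For what it's worth, the whole pattern fits in a few lines. Here is a minimal sketch of the flow described above (all names are hypothetical, and a fake backend stands in for real rendering and input):

```python
# Minimal immediate-mode GUI sketch: each widget call draws AND handles
# input in one shot, returning interaction results; no widget objects
# are retained between frames.

class FakeUI:
    """Stand-in backend: records draw calls and fakes one mouse click."""
    def __init__(self, clicked_label=None):
        self.clicked_label = clicked_label
        self.draw_calls = []

    def button(self, label):
        self.draw_calls.append(("button", label))
        return label == self.clicked_label  # True if "clicked" this frame

    def label(self, text):
        self.draw_calls.append(("label", text))

def render_gui(ui, game_state):
    ui.label("Score: {}".format(game_state["score"]))
    # hiding a control == simply not calling its function this frame
    if not game_state["paused"]:
        if ui.button("Pause"):          # draw + poll input in one call
            game_state["paused"] = True
    else:
        if ui.button("Resume"):
            game_state["paused"] = False

state = {"score": 10, "paused": False}
render_gui(FakeUI(clicked_label="Pause"), state)  # user clicks Pause
render_gui(FakeUI(), state)                       # next frame shows Resume
```

Note how the data lives entirely outside the GUI, and the game-state branching and the widget calls are one and the same code path.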



Now I found that Unity3D actually uses this very same basic approach for its built-in GUI system. There are probably a couple of GUIs with this approach out there as well?


Still, when looking around, there seems to be a strong bias towards retained GUI systems? At least I haven't found this approach except in Unity3D, and the original IMGUI community seems to be rather... quiet.


So anyone worked with both ideas and have some strong opinion?


Edit: I am most interested in opinions that stem from real-world experience. I think there is a lot of heated discussions in the IMGUI-forum about any "theoretical weakness" of the immediate GUI approach, but I always find it more enlightening to know about real-world weaknesses.



Answer



Nay. I've done paid gamedev work on an awful 'retained mode' GUI and on an awful 'immediate mode' GUI, and although both made me want to tear my eyes out, the retained-mode approach is still clearly the better one.


The downsides of immediate mode are many:



  • they don't make it easy for artists and designers to configure the layout;

  • they make you mix logic with presentation;


  • they make it harder to have a single consistent input model;

  • they discourage complex controls of dynamic data (eg. list views);

  • layering becomes very awkward (eg. if I call RenderComboBox, the drop-down box can't possibly render yet because it needs to go above the rest of the window - same for tool-tips);

  • configuring rendering style ends up being done with globals (ick) or extra function arguments (ick);

  • performance can be poor because calling all these functions to query values that may or may not have changed tends to create many small objects, stressing the memory manager;

  • ...and I can't even imagine how painful it would be to allow someone to drag bits of the GUI around and re-order them. You'd have to maintain a data structure telling you where to render everything - which is much like rewriting a retained mode system yourself.


Immediate mode GUIs are tempting for lone programmers who want a quick HUD system, and for that purpose they are great. For anything else... just say no.


tense - "I have ridden a rollercoaster and now I'm dizzy" - correct or not?


I am not a native English speaker, and the sentence in the title came up yesterday when I was talking with my friend (who is a native English speaker) about English grammar.


Consider the sentence



I have ridden a rollercoaster and now I am dizzy.




(where the intended meaning is that I'm dizzy because of the rollercoaster). In my understanding of the present perfect tense, its use is justified here because I am relating the event of riding a rollercoaster by implying it is the reason I am dizzy right now. However, my friend said that for him this situation is a perfect example of when the simple past tense should be used.


I would like to just take his word on that, but I don't see why the present perfect is being misused in this sentence. Could anyone clarify why the simple past should be used here?


PS. This friend has also mentioned this might be a matter of difference between BrE and AmE (he is American). If this is so, could someone briefly explain how the two are different when it comes to present perfect/past simple distinction?




Thursday, April 25, 2019

procedural generation - Perlin Noise for generating terrain in a 2D side-scrolling game. Is there a way to make variations in noise's amplitude?


For example if my generated levels look roughly like this:


enter image description here


But once in a while I would like to have the "amplitude" rise say 10 times than the rest of the level, so that it would look something like:


enter image description here



That is, once in a while there are deep "trenches" in the level. Now, I know that my terrain's Y values are always between, for example, 200 and 1000 pixels. Is there a way for them to mostly be in that range, but once in a while drop to, for example, 10000 pixels?



Answer



Add another layer of noise to control the amplitude. Scale the noise up (on the X axis) to make the changes in amplitude gradual. Further, you can apply the amplitude changes in an exponential fashion. Applied this way, the difference between noise values of .3 and .4 is not nearly as significant as the difference between .9 and 1. This strategy ensures that you do get some deep trenches, but you don't get them frequently.


Note that this method can easily be applied to also cause high mountains if desired.
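A minimal sketch of this idea (the noise function here is a smoothed value-noise stand-in rather than true Perlin noise, and every constant is made up to match the 200..1000-pixel range in the question):

```python
import math
import random

# A second, much lower-frequency noise layer modulates the amplitude of
# the terrain layer; raising it to a power keeps deep trenches rare.

random.seed(42)
_values = [random.uniform(0, 1) for _ in range(256)]

def noise1d(x):
    """Smoothly interpolated 1D value noise in [0, 1]."""
    i = int(math.floor(x))
    t = x - i
    t = t * t * (3 - 2 * t)  # smoothstep easing
    a, b = _values[i % 256], _values[(i + 1) % 256]
    return a + (b - a) * t

def terrain_height(x):
    base = noise1d(x * 0.1)             # normal terrain detail
    amp_control = noise1d(x * 0.005)    # very low frequency: changes slowly
    amplitude = 9000 * amp_control ** 4 # exponent makes big spikes rare
    # mostly 200..1000, with occasional drops toward 10000
    return 200 + base * (800 + amplitude)

heights = [terrain_height(x) for x in range(1000)]
```

The exponent (4 here) is the knob: higher values make the trenches rarer and more sudden; 1 gives plain linear amplitude modulation.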


sentence usage - Overeating and eating too much


Suppose a mother is advising her son who eats too much; then which one of the following sentences would be correct:




  • Gluttony is harmful to one’s health.

  • Overeating is harmful to one’s health.








  • Don’t eat too much.

  • Don’t over eat.



For me all of the above sentences sound idiomatic and natural.



Answer



Gluttony, being one of the Seven Deadly Sins, is a severe form of overeating and is indeed harmful to one's health. Such people are referred to as gourmands.


One would usually advise against overeating before advising against gluttony.



In possible order of severity



eating
eating too much
overeating
stuffed
can't eat anymore
gluttony



One can use the admonishments




Don’t eat too much.
Don’t over eat.



because



Overeating is harmful to one’s health.



People are not usually warned against extreme gluttony.


The gluttonous gourmand was a glutton for punishment.



meaning in context - What exactly does this sentence mean: " ... typically 10-100 times that of a plain-vanilla email campaign"




"That extra effort is what gives traditional media its added oomph—typically 10-100 times that of a plain-vanilla email campaign"



This sentence is used in below paragraph:




They work because, to your prospect, it's a given that you've put in more effort. (A printed sales pack costs you more to put together; a telephone call takes 100% of one person's attention for its duration.) That extra effort is what gives traditional media its added oomph—typically 10-100 times that of a plain-vanilla email campaign.




I think that it means: extra effort can make the media more appealing. But the next part of the sentence is ambiguous for me:




  • typically 10-100 times that of a plain-vanilla email campaign



Answer



A traditional media [campaign] (in long-established forms such as radio, television, and newspapers)
has more oomph (strength, power, [sex] appeal)
than a plain-vanilla email campaign (standard, basic, without any added features)


It's a pretty vacuous thing to say, since it's almost impossible to measure/quantify the efficacy of an advertising campaign, and totally impossible to quantify the "oomph/power/appeal" of a campaign except by using "increased sales" as a proxy.


I assume the writer means that if you run, say, a newspaper ad campaign, your sales increase will be 10-100 times greater than it would have been if you'd spent the same amount sending "spam" emails. Obviously if you run a properly-targeted email campaign (only contacting past customers, or people otherwise known to have a particular interest in your product), you'll get much better results. But that costs much more than sending spam.


adjectives - Why can "low" become "lower" and "lowest", while "up" can't?


Why can "low" become lower (comparative) and lowest (superlative), while "up" can only become comparative (upper), rather than superlative (uppest)?



The second question is: what acts as a substitute for the superlative of "up"? I believe that it's needed in the language.




Editing: After reading some answers here claiming that the word "up" is not an adjective and that "upper" is not the opposite of "lower", I had to support my initial premise with the Cambridge dictionary, which shows that there is an adjective called "up". In addition, in the same dictionary the word "upper" is marked as an adjective and the word "lower" is marked as an opposite, unlike most of the answers here.


In addition, what about "more up" in the following context: "If you feel a bit depressed today, maybe your mood will be more up tomorrow." Is this not considered a comparative adjective of "up"?




translation - English equivalent of French "quiproquo" (bis)



This question is related to this one and this other one, both regarding the same matter but from distinct points of view.



After reading the above posts I remained unsatisfied because of what I see as a restriction of the scope of the question.

So let me explain how I would like to expose it again.


First of all, it's clear that the English quid pro quo and the French quiproquo are mutual false friends: briefly summarized, the former talks about actions (exchanging things, mutual behaviour, and so on) while the latter talks about situations (confusion between two persons).
This was widely commented on in the posts I quoted, and that's OK.


But the precise question "What would be a good equivalent for the French quiproquo?" didn't get a real and complete answer.
In fact, all posts only took into account its sense of mistaken identity, implicitly talking about people only.


I agree this is the true primary sense, directly due to the literal translation from Latin: qui stands for a person (BTW here we can notice the logical consistency: quid stands for a thing, hence the different sense of the whole formula in English).
But here is the point: in today's current French, this first sense has been widened, so it now concerns not solely persons but also events or even things, within somewhat unclear limits.


Here are some examples:





  • primary sense, about persons



    -On m'a dit de m'adresser à Mr Dupont. C'est bien vous ?
    -Oui mais Dupond avec un "D" : je pense que vous voulez parler à l'autre Mr Dupont, avec un "T".
    -Excusez-moi, c'est un quiproquo.
    ("I was told to see Mr Dupont. That's you, right?" / "Yes, but Dupond with a 'D': I think you want to talk to the other Mr Dupont, with a 'T'." / "Excuse me, it was a quiproquo.")





  • about events




    -On ne t'a pas vu à la réunion hier.
    -On m'avait dit que c'était demain : il y a eu un quiproquo !
    ("We didn't see you at the meeting yesterday." / "I was told it was tomorrow: there was a quiproquo!")





  • about things



    -J'ai allumé le chauffe-eau mais il n'y a pas d'eau chaude à la douche ! Il y en a pourtant au lavabo.
    -Il y a un chauffe-eau séparé pour la douche.
    -Ah ! Si on m'avait prévenu il n'y aurait pas eu ce quiproquo.
    ("I turned on the water heater but there's no hot water in the shower! There is some at the sink, though." / "There's a separate water heater for the shower." / "Ah! If I had been told, there wouldn't have been this quiproquo.")






As you can see, in French a quiproquo is essentially a matter of ambiguity leading to a mistake, whatever it concerns.
NOTE: French readers might criticize my comment, noting that the expansion of the concept from persons to "everything and anything" is at fault. True, but it is equally true that this is the current use!


So, again, the question: is there an English equivalent which would cover this entire scope?



Answer



I think the Google translation of quiproquo into misunderstanding may be the best choice.


For example, If our phone connection is poor and I didn't hear what you said correctly, that can lead to a misunderstanding. If someone told me the time of a meeting in UTC, and I assumed it was in a different time zone and missed the meeting, that would also be a misunderstanding.


The misunderstanding can be about anything (person, event, thing), but all the cases that I can think of involve someone not interpreting information about a situation clearly. The problems with interpretation can be caused by obvious reasons, like noise over a phone connection, or by less obvious reasons, like different understandings of a word or phrase.



Mistake is somewhat related, but usually if I make a mistake, it is a wrong action or judgment that is my own responsibility. A misunderstanding has less responsibility or blame. For example, "Because I misunderstood what the professor said in class, I made a mistake and didn't format my paper correctly." The misunderstanding isn't exactly my fault or the professor's fault even though it caused me to make a mistake.


In the sense of bringing comic relief to a stage play, the only term I can think of is "comedy of errors". While the literal definition refers to a play or other narrative work, it can be used figuratively to describe a situation. Usually "comedy of errors" refers to many misunderstandings though, not just one, and often to a chain of errors, where one misunderstanding causes another which causes another.


paraphrasing - Singular vs plural + per + noun?



I would like to paraphrase a sentence:



"This is how the costs of each course should be."



So my idea was:



"This is how the price/prices per course should be."




What are the differences between singular + per vs. plural + per?



Or


If you have better paraphrases, you're very welcome to share them.


Thank you very much :)



Answer



I don't know where you got the original sentence, but it's odd in two ways.




  1. The "costs" of "each". This implies that each course has more than one cost. This is possible. There may be a cost for full-time students and a different cost for part-time students, etc. But I wonder if the intent was not that each course has one cost. There are many costs, but only one for each course.





  2. "how the costs should be" Normally we would say "what the costs should be" or "how much the costs should be". "How" indicates a method, like "how to bake a cake" or "how we will find our way out". A cost does not normally have a method. It has an amount. Oh, another possibility is that the writer meant, "how the costs will be determined".




As to your paraphrase, the most likely wording is, "This is what the price per course should be." Other wordings are possible depending on just what you're trying to say.


meaning - "Being surrounded" or have been surrounded?


Here are some sentences with "Being surrounded":




  1. I love trees. Like many of us, being surrounded by trees makes me happy.



The source #1





  1. Rachel Boston Quotes "Being surrounded by love and people that care about your heart is the dream. That's what I would like on my last day."



The source #2


What I haven't understood is whether "being surrounded" is an example of "being + past participle" or of "being + adjective". If it's "being + adjective", I have heard that "being + adjective" shows the reason for the action of the main clause. Is that true in the above sentences? Next, does "being surrounded" refer to "have been surrounded" by trees/people/love? And what is the role of "being" with "surrounded" in the above sentences?




Wednesday, April 24, 2019

articles - Exxon and (∅, the) other oil majors


A quote from The Economist:



Yet Exxon and the other oil supermajors are more vulnerable than they look.



What if we insert a zero article in THE's position before the "other":



Yet Exxon and other oil supermajors are more vulnerable than they look.



Would the meaning change from "Exxon and all (each and every) supermajors besides Exxon" to "Exxon and some other oil supermajors (but not necessarily each of them)"?




Answer



Per JLG's comment, in OP's specific example, the zero article can be read as meaning some, but not necessarily all. And in fact since the definite article could be included, if it were to be omitted this would very strongly imply some, but not all.


But suppose we remove the reference to Exxon completely, and just consider...



1: Oil supermajors are more vulnerable than they look.
2: The oil supermajors are more vulnerable than they look.



Personally, I see no reason to assume any distinction at all. Either version could continue with further text making it clear that the vulnerability inherently applies to all such companies (because of their very nature, perhaps). Equally, either version could continue with further text explaining how a few such companies have managed to avoid being vulnerable. In short, context is everything (as usual! :)


adverbs - Can we use "then" before the main verb? "... we'll then go."




Each page takes less than a minute to produce, although for colour pages four versions, once each for black, cyan, magenta and yellow are sent. The pages are then processed into photographic negatives and the film is used to produce aluminium printing plates ready for the presses. ROBOTS AT WORK



but I always use then between two phrases (when it means: next or after that).



Let me finish this job, then we'll go.



My question is: can we use it before the main verb?



Let me finish this job, we'll then go.





Answer



Adverbs of time can be placed at the start of a sentence, after an auxiliary verb/modal, before a main verb or at the end of a sentence:


Soon we must make a decision
we must soon make a decision
we must make a decision soon.


With so many choices of location, even small things affect the placement. In the case of your sentence, one of the things that may affect it is the rhythm. For this meaning, the word then must be a strong syllable, and in the first sentence go is also a strong syllable, giving a nice rhythm.



Let me finish this job, then we'll go.




The second sentence also requires go to be a strong syllable, which puts two strong syllables next to each other: we generally avoid this rhythm if possible.



Let me finish this job: we'll then go.



If we add some more information, for example a where clause, go loses its stress, which moves to the where clause, and we find this rhythm much more comfortable.



Let me finish this job: we'll then go and get something to eat.



c# - How can I edit the components of an instantiated prefab?


Is it possible to edit one or more components of an instantiated prefab?


For example, if you instantiated five cube prefabs, is it possible to reference each cube prefab and change a value individually, such as its position or even its scale?


If so, how can it be done? And if it can, could it be done in the Update() function, so that values can be experimented with while the game is running?




Tuesday, April 23, 2019

xna 4.0 - How'd they do it: Millions of tiles in Terraria


I've been working up a game engine similar to Terraria, mostly as a challenge, and while I've figured out most of it, I can't really seem to wrap my head around how they handle the millions of interactable/harvestable tiles the game has at one time. Creating around 500,000 tiles, that is 1/20th of what's possible in Terraria, in my engine causes the frame rate to drop from 60 to around 20, even though I'm still only rendering the tiles in view. Mind you, I'm not doing anything with the tiles, only keeping them in memory.



Update: Code added to show how I do things.


This is part of a class, which handles the tiles and draws them. I'm guessing the culprit is the "foreach" part, which iterates everything, even empty indexes.


...
public void Draw(SpriteBatch spriteBatch, GameTime gameTime)
{
    foreach (Tile tile in this.Tiles)
    {
        if (tile != null)
        {
            if (tile.Position.X < -this.Offset.X + 32)
                continue;
            if (tile.Position.X > -this.Offset.X + 1024 - 48)
                continue;
            if (tile.Position.Y < -this.Offset.Y + 32)
                continue;
            if (tile.Position.Y > -this.Offset.Y + 768 - 48)
                continue;
            tile.Draw(spriteBatch, gameTime);
        }
    }
}
...

Also here is the Tile.Draw method, which could also do with an update, as each Tile uses four calls to the SpriteBatch.Draw method. This is part of my autotiling system, which means drawing each corner depending on neighboring tiles. The texture_* fields are Rectangles, set once at level creation, not on each update.


...
public virtual void Draw(SpriteBatch spriteBatch, GameTime gameTime)
{
    if (this.type == TileType.TileSet)
    {
        spriteBatch.Draw(this.texture, this.realm.Offset + this.Position, texture_tl, this.BlendColor);
        spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(8, 0), texture_tr, this.BlendColor);
        spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(0, 8), texture_bl, this.BlendColor);
        spriteBatch.Draw(this.texture, this.realm.Offset + this.Position + new Vector2(8, 8), texture_br, this.BlendColor);
    }
}
...

Any critique or suggestions to my code is welcome.


Update: Solution added.


Here's the final Level.Draw method. The Level.TileAt method simply checks the inputted values, to avoid OutOfRange exceptions.



...
public void Draw(SpriteBatch spriteBatch, GameTime gameTime)
{
    Int32 startx = (Int32)Math.Floor((-this.Offset.X - 32) / 16);
    Int32 endx = (Int32)Math.Ceiling((-this.Offset.X + 1024 + 32) / 16);
    Int32 starty = (Int32)Math.Floor((-this.Offset.Y - 32) / 16);
    Int32 endy = (Int32)Math.Ceiling((-this.Offset.Y + 768 + 32) / 16);

    for (Int32 x = startx; x < endx; x += 1)
    {
        for (Int32 y = starty; y < endy; y += 1)
        {
            Tile tile = this.TileAt(x, y);
            if (tile != null)
                tile.Draw(spriteBatch, gameTime);
        }
    }
}
...


Answer



Are you looping through all 500,000 tiles when you're rendering? If so, that's likely going to cause part of your problems. If you loop through half a million tiles when rendering, and half a million tiles when performing the 'update' ticks on them, then you're looping though a million tiles each frame.


Obviously, there's ways around this. You could perform your update ticks while also rendering, thus saving you half the time spent looping through all those tiles. But that ties your rendering code and your update code together into one function, and is generally a BAD IDEA.


You could keep track of the tiles that are on the screen, and only loop through (and render) those. Depending on things like the size of your tiles, and screen size, this could easily cut down the amount of tiles you need to loop through, and that would save quite a bit of processing time.


Finally, and perhaps the best option (most large world games do this), is to split your terrain into regions. Split the world into chunks of, say, 512x512 tiles, and load/unload the regions as the player gets close to, or further away from, a region. This also saves you from having to loop through far away tiles to perform any sort of 'update' tick.
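Though the question uses C#/XNA, the region idea above is language-agnostic; here is a minimal sketch in Python (names like CHUNK_SIZE, load, and unload are illustrative, not from the original code):

```python
# Illustrative sketch of region-based chunk loading. The world is split
# into fixed-size chunks; only chunks near the player stay in memory.

CHUNK_SIZE = 512  # tiles per chunk side, as suggested above
LOAD_RADIUS = 1   # keep the player's chunk plus its neighbours

def chunk_coords(tile_x, tile_y):
    """Map a tile coordinate to the chunk that contains it."""
    return (tile_x // CHUNK_SIZE, tile_y // CHUNK_SIZE)

def wanted_chunks(player_tile_x, player_tile_y):
    """Set of chunk coordinates that should currently be loaded."""
    cx, cy = chunk_coords(player_tile_x, player_tile_y)
    return {(cx + dx, cy + dy)
            for dx in range(-LOAD_RADIUS, LOAD_RADIUS + 1)
            for dy in range(-LOAD_RADIUS, LOAD_RADIUS + 1)}

def update_loaded(loaded, player_tile_x, player_tile_y, load, unload):
    """Load newly needed chunks and unload far-away ones."""
    wanted = wanted_chunks(player_tile_x, player_tile_y)
    for c in wanted - set(loaded):
        loaded[c] = load(c)           # e.g. read tiles from disk
    for c in set(loaded) - wanted:
        unload(c, loaded.pop(c))      # e.g. write tiles back to disk
```

Update and draw loops then iterate only over the chunks currently in `loaded`, so far-away tiles cost nothing per frame.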


(Obviously, if your engine doesn't perform any sort of update tick on tiles, you can ignore the parts of this answer that mention those.)


architecture - How should my game characters store their abilities/spells?


I'm new to game development and a bit confused about how to effectively store an object's access to certain spells/abilities.


The player and mob objects are all generated from the same class. However, each object may have access to different abilities and spells. How do I keep track of a specific object's access to these?


Example: My player object might have access to Fireball and Haste, while a goblin might have access to Confusion.


I followed the RogueBasin Python Libtcod tutorial, and I understand how to attach a spell function to scroll use. I'm just not sure how to emulate a character memorizing a spell and using it as an ability.



Answer



The idea is to have spell objects hold some reference to the in-code action you want that spell to do.



Python's first-class functions make this quite nice (I'll assume Python 2.7.x):


class Spell:
    def __init__(self, name, description, activationFunction):
        self.name = name
        self.description = description
        self.activationFunction = activationFunction

    def cast(self, gameState):
        return self.activationFunction(self, gameState)

# Create a function to be called when the spell is cast. Give it as many
# parameters as you like.
def activationFunction(thisSpell, gameState):
    print thisSpell.name, 'was cast at game state', gameState

# Pass the function to a spell when instantiating it
fireballSpell = Spell('Fireball', 'standard spam spell', activationFunction)

# Cast the spell (with a dummy value representing the gameState parameter)
fireballSpell.cast("zero")


This prints


Fireball was cast at game state zero

You could then store instances of Spell in some class Creature if you like. You could add a caster parameter to every spell to which the creature passes self, or whatever other parameters you'd like.
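That storage idea might look like the following sketch (the Creature class and its method names are my own, not from the tutorial; written in Python 3 syntax, unlike the 2.7 example above):

```python
# Sketch of creatures holding Spell instances, with a caster parameter
# passed through to the spell's activation function (illustrative names).

class Spell:
    def __init__(self, name, activationFunction):
        self.name = name
        self.activationFunction = activationFunction

    def cast(self, caster, gameState):
        # Forward the caster, as suggested in the answer above
        return self.activationFunction(self, caster, gameState)

class Creature:
    def __init__(self, name, spells=None):
        self.name = name
        # Memorized spells, keyed by name for easy lookup
        self.spells = {s.name: s for s in (spells or [])}

    def learn(self, spell):
        self.spells[spell.name] = spell

    def cast(self, spellName, gameState):
        return self.spells[spellName].cast(self, gameState)

def fireball(spell, caster, gameState):
    return '%s casts %s' % (caster.name, spell.name)

player = Creature('Player', [Spell('Fireball', fireball)])
goblin = Creature('Goblin', [Spell('Confusion', fireball)])
```

Each creature then only "knows" the spells in its own dictionary, which directly models the player knowing Fireball and Haste while the goblin knows only Confusion.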


Instead of def-ing the activationFunction, you could also create an anonymous function with lambda. (Though only if it fits on one line… Sorry if I got your hopes up.)




For language-completeness:


In JavaScript, Lua, CL, and other FP-languages or languages with first-class functions, the above works with few conceptual modifications.


For a similar idea in C#, Java, C++ and OOP-languages, the strategy pattern is a good analogue for this. (Some even write Python in this style. I wouldn't bother.)
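Since the answer notes that some people write Python in this style too, here is what the strategy-pattern variant might look like (a sketch of the pattern, not the tutorial's code): each concrete spell is a subclass overriding cast, instead of wrapping a passed-in function.

```python
# Strategy-pattern sketch: each concrete spell is its own class that
# overrides cast(), rather than holding a first-class function.

class Spell:
    name = 'Generic spell'

    def cast(self, gameState):
        raise NotImplementedError

class Fireball(Spell):
    name = 'Fireball'

    def cast(self, gameState):
        return '%s was cast at game state %s' % (self.name, gameState)

class Haste(Spell):
    name = 'Haste'

    def cast(self, gameState):
        return '%s speeds things up in state %s' % (self.name, gameState)

# A creature's spellbook is just a list of strategy objects
spellbook = [Fireball(), Haste()]
```

Both versions let callers treat all spells uniformly through cast(); the trade-off is one class per spell versus one function per spell.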


In C or other system programming languages you could use function pointers, but those can quickly get fiddly to work with. It may just be best to declare one big function struct SpellResult castSpell(...) that takes all spell parameters, then switches on a passed-in enum spellType specifying which particular spell was meant. Values of that enum could then be stored by other game objects, for passing to castSpell.



server - Synchronize turn based browser game



I'm writing a browser game in PHP and SQL. I'm also using JavaScript (Ajax) and MySQL.


I'm stuck on the battle system because I want to synchronize the turns of the players in the battle.


What I'm doing is putting two players into a battle. From the first turn, a countdown of 60 seconds will start. What I am thinking is to use a server function to check the countdown, rather than doing it on the client side, because I think client-side counting would lead to bad synchronization, whereas a server-side countdown would not.


But.. how can I let the server do all this stuff?
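One common server-authoritative approach (not part of the original question, and sketched in Python rather than PHP for brevity) is to store the turn's deadline when the turn starts and compare it against the server clock on every Ajax request, instead of running any trusted timer at all:

```python
# Illustrative sketch of a server-authoritative turn timer. In the real
# game this logic would live in PHP, with the deadline stored in MySQL.

import time

class Battle:
    TURN_SECONDS = 60

    def __init__(self, clock=time.time):
        self.clock = clock      # injectable clock, handy for testing
        self.deadline = None

    def start_turn(self):
        # Record when the current turn must end (persist this in the DB).
        self.deadline = self.clock() + self.TURN_SECONDS

    def seconds_left(self):
        # Sent back to both clients so their displayed countdowns agree.
        return max(0, self.deadline - self.clock())

    def turn_expired(self):
        # Every poll calls this; the server clock decides, not the client.
        return self.clock() >= self.deadline
```

Each browser merely polls and renders seconds_left(); when turn_expired() is true, the server passes the turn itself, so a slow or cheating client cannot desynchronize the battle.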




Monday, April 22, 2019

actionscript 3 - How should I manage game levels in a flash games?


I am still learning and I am trying to design the game levels for my first game (a platformer).


I have some straightforward questions because I feel I am on the wrong way.


Should each level have its own class?


The way I am trying to do this is: when one finishes the first level, I set all variables belonging to that class to null, unregister all events, remove all children from the stage, call the second level's constructor without declaring any variable (new level2();), and set a static var to true in order to "save" that progress in case one wants to play that level again.


However I see the code becomes messy, I run into issues, and memory management concerns arise.


I searched for help all over the internet but I can't find any tutorial that explains it in detail, so I'm trying out many ways of achieving what I want, but now I really need some advice so as not to get lost.




sentence construction - until + perfect present / present



1.I will study hard until I get a high score on this exam.


2.I will study hard until I've gotten a high score on this exam.



I think these sentences are grammatically correct. But if I add a specific time, I think the sentence has to be only this one:



3.I will study hard until I get a high score on this exam on July 25th.




And how about these?



a. I'd like to watch all of the holocaust movies present in the world until I've become an expert on it.


b. I'd like to watch all of the holocaust movies present in the world until I become an expert on it.



Since there's no word representing a specific time, both of them are okay, right?


And what I'd like to know is the difference between "have gotten" and "get". What I know is that we can use the perfect to refer to an unspecified future event that might happen before some other event, while "get" refers to a future event that could be either a specific scheduled event or an unspecified one. That's why, if I add a mention of a specific time, I think I have to use "get".



Answer



Both sentences are grammatically correct.


The word until is used both as a preposition and as a conjunction.



When you use it as a conjunction, it means "up to the time that". You usually use the present simple in the until-clause, but it's possible to use the present perfect, without any change in meaning. If there is any subtle difference, that is in regard to the degree of emphasis.


Your second sentence with the present perfect in the until-clause puts more emphasis on the completion of the action in the until-clause.


The OP is right that both the present simple and the present perfect in the until-clause are indicative of actions or events in the future.

