I'm looking for a name for my class that manipulates 4x4 matrices that handle position, rotation, and scale. Is there a common word that encompasses all three?
(I'm splitting the matrix math into its own file/class.)
I'm making a PC horror game in Unity, and I want to have an in-game camera-recording simulation: you can leave the creepy location at any time, and you'll have a tape of what you recorded on camera while exploring.
Technically, I need to attach a second camera (hand-cam) to the character's hand and let the player look through the viewfinder by pushing a button. But in any position, the hand-cam must record everything in its view at that point (floor, stairs, rubbish, etc.), whether it's lowered to hand level or raised to match the player's viewpoint.
When the game ends (player leaves or dies), I'd like to have an option to save recorded tapes to a video file (if this is not possible, then to some save file).
How can I implement such a feature in my game? Are there Unity tricks, add-ons, or extra modules for this? Should I save the scene state with its active triggers, or just capture video from the screen?
Below is a quick MSPaint concept of what I mean:
Answer
You need to render your scene twice.
First, you render the scene from the view of the hand-cam to a texture.
Then you render the scene from the player's point of view, with that texture applied to the camera's screen.
What changes between the two render passes is the camera transform used: the camcorder pass uses the orientation and location of the hand-cam, while the player's view uses the orientation and location of the player's head. If you're using skeletal animation, this is as easy as attaching the camera model to the player model's hand.
Since you already render to a texture, saving a video is as easy as saving the texture each frame. When the user chooses to save the video, you can convert this sequence of images to a video format.
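A minimal Unity-side sketch of that answer, assuming a RenderTexture asset assigned in the inspector; the names handCam and tape are placeholders. Note that ReadPixels stalls the GPU, so a real recorder would batch the readbacks or use AsyncGPUReadback:

```csharp
// Sketch: hand-cam renders to a RenderTexture; each frame is copied out
// and written as a numbered PNG.
using System.IO;
using UnityEngine;

public class TapeRecorder : MonoBehaviour
{
    public Camera handCam;       // the camera attached to the player's hand
    public RenderTexture tape;   // assign a RenderTexture asset, e.g. 640x480

    int _frame;

    void Start()
    {
        handCam.targetTexture = tape;   // hand-cam now renders to the texture
    }

    void LateUpdate()
    {
        // Copy the current frame off the GPU (slow: forces a sync).
        var prev = RenderTexture.active;
        RenderTexture.active = tape;
        var frameTex = new Texture2D(tape.width, tape.height, TextureFormat.RGB24, false);
        frameTex.ReadPixels(new Rect(0, 0, tape.width, tape.height), 0, 0);
        frameTex.Apply();
        RenderTexture.active = prev;

        File.WriteAllBytes(
            Path.Combine(Application.persistentDataPath, $"tape_{_frame++:D5}.png"),
            frameTex.EncodeToPNG());
        Destroy(frameTex);
    }
}
```

The resulting PNG sequence can then be muxed into a video outside the game (e.g. with ffmpeg) when the run ends.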
I'm in the planning stages for an internal game engine I am about to start creating, which will be used for all my games going forward. But I'm struggling a bit with how it should be built.
The choices come down to: framework or library.
My basic objective is to hide engine details as much as possible, keeping high-level game development in scripts and config files, but also to reuse the core engine for any tools we might develop in the future.
Frameworks can make things nice and easy for development, but then you're locked in. Libraries are good if you're only interested in a specific subsystem, but we need to glue everything together on a game-by-game basis.
There is also a third option: building the engine as a standalone executable that handles all game resources and all the subsystems. Game logic (and other per-game dynamic content) is done exclusively in scripts, with config files to configure each internal engine subsystem.
Which one will give me more flexibility in the future?
Thanks.
Edit: Thanks everybody; I guess I was looking at this from the wrong perspective. We can't really plan something like this without a priori knowledge of what the games need; I suppose this is why there is no such thing as a general engine for all genres.
I'll focus first on the game proper, then iteratively analyze at the end of each game how to make decent abstractions for libraries or frameworks, depending on my workflow (or possibly that of a future team).
I am making a RTS game, and I'd like some advice on how to best render fog of wars, given what I'm already doing.
You can imagine this game as a classic RTS like Age of Empires 2, where the fog of war is basically handled by a 2D array telling whether a given "tile" is explored or not. The specific things to consider here are:
1) I'm only doing a few draw calls to draw the whole screen, using shaders, and I'm not drawing "tile by tile" in a 2D loop
2) The whole map is much bigger than the screen, and the screen can move every frame or so
In that case, how could I draw the fog of war? I have no issue maintaining a 2D array on the CPU side giving the fog-of-war state for each tile, but what would be the best way to actually display it dynamically? Thanks!
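For reference, one common approach that fits these constraints (few draw calls, map bigger than the screen) is to upload the CPU-side array as a single small grayscale texture and let the fragment shader sample it by world position, so the fog costs no extra draw calls. A minimal sketch of the CPU-side bookkeeping, with made-up map dimensions:

```python
# The CPU-side fog array becomes one small grayscale texture that the
# fragment shader samples, so the whole map still draws in a few calls.
# Map dimensions and tile size below are made-up example values.

MAP_W, MAP_H = 128, 96    # map size in tiles
TILE_SIZE = 32.0          # world units per tile

# 0 = unexplored, 1 = explored; only re-uploaded when something changes.
fog = [[0] * MAP_W for _ in range(MAP_H)]

def reveal(tx, ty):
    fog[ty][tx] = 1

def fog_texture_bytes():
    """Flatten the array into the byte buffer uploaded as the fog texture."""
    return bytes(255 * fog[y][x] for y in range(MAP_H) for x in range(MAP_W))

def fog_uv(world_x, world_y):
    """The UV a fragment shader would compute: world position -> fog texel.
    On the GPU this is one multiply per fragment; shown here for clarity."""
    return (world_x / (MAP_W * TILE_SIZE), world_y / (MAP_H * TILE_SIZE))
```

Because the UV comes from the world position, scrolling the screen costs nothing; the shader just darkens fragments whose fog sample is 0.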
I am like 99% sure that Americans do what I said in the title, but I wanted to ask you anyway. My question is: when a word ends with /rd/ after a vowel and the next word starts with a vowel, and also when /rd/ is between two vowels within a word, Americans usually make a "flap d" sound, which is the exact same sound as the "flap t", right? I know that the /t/ of /rt/ (as in the word "party") is flapped in the positions I mentioned, but I am not 100% sure whether the same thing happens to the /d/ of /rd/ too.
For example, the /d/ sounds in sentences like "This bird is so beautiful", "Was that show aired in the USA too?", "I never heard of him", etc. are the exact same flap sounds as the /t/ sounds in sentences like "This part of the game is very hard", "This is sort of crazy", "I never hurt anybody", etc., right? (As I said, I know that Americans make a flap in sentences like "This is sort of crazy", which include /rt/, but I am not 100% sure whether they make a flap sound in sentences like "This bird is so beautiful" as well, which include /rd/ instead of /rt/.)
Or the /d/ sounds in the words "skateboarding", "ordinary", "herder", "order", "hurdle", etc. are exactly the same as the /t/ sounds in words like "party", "mortal", "turtle", "quarter", etc., aren't they? (As I said, I know that Americans make a flap in words like "party", "mortal", etc., which include /rt/, but I am not 100% sure whether they make a flap in words like "order", "herder", etc. too, which include /rd/ instead of /rt/.)
I am used to making the flap sound instead of the standard /d/ sound in those situations (unless the /d/ is the first sound of a stressed syllable), and it mostly sounds natural to me. If I make a standard /d/ sound (like the /d/ in the word "day") in the words and sentences I gave ("order", "skateboarding", "This bird is so beautiful", etc.), I don't sound like a native American English speaker, right?
Correct the sentence:
The ebb and flow of the tides / are / now understood.
In the above sentence I can go two ways:
Or is it that since tides is given, I need to go with are? I'm quite confused. Suggestions, please.
I was looking on the internet but really haven't found anything definite on this. I was writing a letter for IELTS practice and I came up with this phrase:
"When I arrived to the school I really did not know what to expect.".
I'm doubtful about whether the "to" is well placed there.
I think the following option with "at" instead of "to" sounds better as in:
"When I arrived at the school I really did not know what to expect."
but I'm not sure if those two are interchangeable in this particular example.
1- The funeral is at 3.00, followed by a reception at Shawn's bar.
I saw this sentence in a TV series. Can I say that this sentence is a reduced form of the sentences below?
2- The funeral is at 3.00, which is followed by a reception at Shawn's bar.
3- The funeral is at 3.00, which will be followed by a reception at Shawn's bar.
A1. I still can't speak English.
A2. I can't speak English yet.
B1. *I yet can't speak English.
B2. *I can't speak English still.
As far as I know, A1 and A2 are acceptable English.
But, I wonder, why are "yet" and "still" not perfectly interchangeable?
Is this a matter of grammar, style, vocabulary or usage?
Answer
First and foremost, very few words in English are "perfectly" interchangeable.
NOAD says:
still (adv.) up to and including the present or the time mentioned
yet (adv.) up until the present or an unspecified or implied time
I hadn't thought much about this before, but using the word yet suggests a glimpse into the future:
I can't speak English yet – but I won't quit trying until I do.
while using the word still suggests a glimpse into the past:
I still can't speak English – even though I've been trying for 10 years!
I'll try this again; the quotes here are in italics, what follows in [brackets] is what I might infer from the speaker's choice of words:
The bus hasn't come yet [but I expect it will come soon].
The bus still hasn't come [I've been waiting such a long time!]
I think you can even combine both words to express exasperation:
We've been potty training Dora for six months now, but she still hasn't got it yet!
That wording indicates it's been a long time, but there's still hope the desired result will happen eventually. Similarly, going back to your original examples, one could say:
I still can't speak English yet!
By the way, this answer hasn't even mentioned the use of these words to mean "even", as in:
We'll have even more snow tomorrow.
We'll have yet more snow tomorrow.
We'll have still more snow tomorrow.
That's another context entirely.
I need help with a script for a very simple car that uses transform.Translate:
using UnityEngine;

public class car1 : MonoBehaviour {
    public PauseMenu pause;
    [Space]
    public Car_Script car_s;
    public float speed;
    public GameObject car;

    void FixedUpdate()
    {
        if (car_s.InCar && !pause.isPaused)
        {
            if (Input.GetKey(KeyCode.W))
            {
                car.transform.Translate(Vector3.forward * Time.deltaTime * speed);

                if (Input.GetKey(KeyCode.A))
                    Steer(Vector3.down);

                if (Input.GetKey(KeyCode.D))
                    Steer(Vector3.up);
            }
        }
    }

    // Turn rate falls off as speed rises; no turning while stopped.
    void Steer(Vector3 axis)
    {
        if (speed > 0F && speed <= 50F)
            car.transform.Rotate(axis, 2F);
        else if (speed > 50F && speed < 90F)
            car.transform.Rotate(axis, 1.3F);
        else if (speed > 90F)
            car.transform.Rotate(axis, 0.9F);
    }
}
Whenever I drive the car it immediately goes through walls.
I have colliders on everything. The car has mesh colliders set to convex.
I've searched around quite a bit for answers and it seems that a common one is using rigid bodies. But whenever I even put a rigid body on the car, it flies out into the sky.
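For reference, the rigidbody route those answers suggest usually means moving through the physics engine instead of transform.Translate, so colliders can actually stop the car. A minimal sketch (field names are placeholders; the rotation constraints are one way to stop the car from flipping or launching when its collider starts out overlapping the ground):

```csharp
// Sketch of rigidbody-based movement: MovePosition lets the physics
// engine resolve collisions instead of teleporting through walls.
using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class CarPhysics : MonoBehaviour
{
    public float speed = 10f;
    Rigidbody _rb;

    void Start()
    {
        _rb = GetComponent<Rigidbody>();
        // Freezing tilt keeps a simple car stable on its wheels.
        _rb.constraints = RigidbodyConstraints.FreezeRotationX
                        | RigidbodyConstraints.FreezeRotationZ;
    }

    void FixedUpdate()
    {
        if (Input.GetKey(KeyCode.W))
            _rb.MovePosition(_rb.position
                             + transform.forward * speed * Time.fixedDeltaTime);
    }
}
```

A car that "flies into the sky" the moment a rigidbody is added is usually a sign the colliders interpenetrate at spawn, so it is also worth checking that the convex mesh collider doesn't start inside the ground.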
I am currently developing a simple 2d MMORPG. My current focus is the inventory system.
I am currently wondering if I should implement a limit on what a player character can carry, either in the form of a maximum weight, a limited number of inventory slots, or a combination of both. Almost every MMORPG I ever played limits inventory space. But plausibility aside, is this really necessary from a gameplay point of view? Maybe it would in fact improve the game experience if I just let the players carry as much stuff as they want.
tl;dr: What is the game development rationale behind limiting carrying capacity of player characters?
Edit: Thanks for all the answers so far. They all were very insightful. After your input I decided to go for a limited inventory to prevent people from carrying too many healing items and too much specialized equipment into dungeons. To avoid the problem of loot overload and having to return to the town all the time, I plan to give players the ability to send items from their inventory directly to their storage (but not the ability to retrieve them in the field). I accepted the answer by Kylotan for now, but do not let this discourage you from posting additional answers, when you feel that some interesting aspect wasn't covered yet.
Answer
Much of game design is about resource management, because deciding how best to use limited resources is an interesting choice that games can easily implement. Limiting the inventory forces players to think about the value of each item and make decisions on whether to hoard or sell their loot, and on which items to carry out into battle with them.
Many times, while talking to others or writing text messages, I have gotten confused about which phrase I should use to express the actual meaning of the words
It seems . . .
and
It looks like . . .
Please help me clarify my doubts about how to use these phrases correctly.
Answer
There is a subtle difference in usage between the two.
When we say "it looks like", we are talking about a quick visual inspection. If we want to investigate something casually, we say we will "take a look". The implication is that this is a brief, "at a glance" impression, and while we have some confidence we are not absolutely certain. "It looks like" is usually used to imply something we are almost certain about. There is a saying: "If it looks like a duck, and it walks like a duck, chances are... it's a duck."
When we say "it seems like", we are focusing on the impression given by the subject. There is more doubt implied in this phrasing, because it suggests you may be deceived, whereas "it looks like" suggests you are likely correct.
Both phrases are often used humorously. It's funny to use one of these phrases when the outcome is not in doubt. ("Well, we're all gonna die here." "Looks like.") ("That didn't work" "Didn't seem to, no.") It's also funny to see someone use one of these phrases to show they are casual in a very serious situation: https://www.youtube.com/watch?v=Bbzuu14bGgs
While playing FIFA, it occurred to me that there is probably a license that EA pays musicians to play their songs in the game.
Does anyone know if there is a standard license for indie games to be able to play commercial songs, either in the intro or in the game itself, like FIFA does?
I did a brief search, and the suggestion is to contact the publisher of the specific group/singer; but 90% of the time they probably won't even read such an email. Is this the only way?
Of course this doesn't apply to free music that is distributed for unrestricted use; my question is specifically about commercial music from decently known artists.
Answer
Usually "decently known" musicians sell all their copyrights to the record label, so the label is who you need to contact when you want a license. Such record deals are usually exclusive, so the artists themselves could not sell you the rights even if they wanted to. Unless it's a lesser-known artist, expect to pay a very large amount of cash for the rights to their music.
You should also definitely get a lawyer for the license negotiation to make sure you are buying exactly the rights you need.
These institutions were started by Brougham and Birkbeck in the twenties at a time when, as a writer described it, “there still prevailed in many quarters a strong jealousy of any political discussion by the people, and still more of any society which proposed to assemble periodically several hundreds of the labouring classes”. Hence their founders, in their desire to conciliate opposition, banned political or religious discussion or books, and forbade newspapers.
How do the adverbs still + more compound to generate its meaning?
Answer
Even more
yet more
greater still
Noun:
There (existential there)
verb
prevailed
adverb
still
subject complement
a strong jealousy
1) of any political discussion by the people
conjunction:
and
adverb
still more (=even more)
2) of any society which...
I like baseball even/still more than I like football.
How do I say that I'm doing something without a valid reason?
For example:
When I chose a course because my friends did, OR
When I invested in a stock because its symbol sounded like my name.
How should I describe such a situation, where I make a decision that's based not on solid reasoning but rather on some trivial information?
Answer
Lots of good answers so far, but somehow nobody's mentioned the great terms whim, whimsy and whimsical which seem to me to closest fit your examples. Picking a stock because its symbol sounded like one's name is a great example of whimsy, a totally whimsical thing to do, something one did on a whim.
Unlike all the terms mentioned so far, there's no negative connotation to whimsy and whimsical. Whim can have a negative connotation, but doesn't necessarily.
The aforementioned capricious is also good, but perhaps more pertinent is caprice, "a sudden and unaccountable change of mood or behavior", which has less negative connotation than capricious (which can go either way). If the mood aspect is applicable, you may also want to check out mercurial, which, before it became an SCC, meant in thrall to one's mood of the moment.
Addendum: Also in a similar vein: the idioms on a lark, for a lark, for kicks, and for kicks and giggles.
May I know whether the following two sentences are grammatical and idiomatic?
I just went to Venice last September
I've just gone to Venice last September
I came across these two sentences and asked myself whether I could always use both versions:
I didn't know Ed was Welsh.
Did you know that Cliff's wife is Canadian?
Both relate to someone's nationality. I assume both people are still alive. So could I say I didn't know Ed is Welsh? Or, as I know now, do I have to use the simple past (was) here?
Answer
Mixing present and past happens on some occasions. You can use the present simple instead of a past tense when something is permanently true, as in:
He taught me that knowledge is power.
He told me the Earth goes round the sun.
He said he loves ice cream.
In your example too, if the guy said that he was Welsh, he's still Welsh, I suppose, at least until the next time I see him and can make sure he's still Welsh or hasn't taken refuge in another country to become a national of that country.
You can use both sentences then:
I didn't know Ed was Welsh.
I didn't know Ed is Welsh.
A)They had been drowned.
B)They had drowned.
What is the difference between the above two sentences?
(They were looking for the dead bodies of the three boys because they assumed that they 'had been drowned'/ 'had drowned'.)
Which of the two options sounds appropriate?
What does "whence" mean in the following?
They returned whence they had come.
The Oxford Advanced Learner's Dictionary defines it as "from where":
But Oxford Dictionary Online lists as one of its senses "to the place from which."
If we follow the Oxford Advanced Learner's Dictionary's definition, does the sentence mean "They returned from where they had come"? Is that a correct definition?
I'd appreciate your help.
Being a teacher, she likes children.
AND
Having been a teacher, she likes children.
What is the difference between these two?
A friend of mine and I are planning a game together to work on in our free time. It's not an extensive game, but it's not a simple one either.
He's working on the story behind the game while I'm working on the graphics and code.
I don't really know where to start with the game. We know what the basic type of game it's going to be and how it would be played, but I'm having a hard time of actually knowing where to begin.
I have Xcode open but I don't really even know what I should be designing first.
What is some advice for this writer's block? Where is a good place to start with a game? Should I design all the graphics and layout before even touching Xcode? Should I program the things I know I'll have difficulty with first before getting to the easy stuff?
Answer
Start by getting something up and playable. Don't spend more than an hour on graphics; just render the game with rubbish placeholders. Games don't have objective requirements to build to: the key requirement is that it be fun, and evaluating whether a game is fun or not requires playing it; as a result, game design should follow an iterative approach. (I recommend Tom Wujec's talk on the marshmallow challenge as an orientation to iterative design.)
Here is a link of Sean Murray talking about the game No Man's Sky:
https://www.youtube.com/watch?v=h-kifCYToAU
Starting at around 4:00 in the video, he is talking about how the environment is procedurally generated.
At first, I thought it meant that they ran some incredibly complex algorithms to generate the entire universe and then stored it.
But as he explains, the world doesn't really exist as stored 3d data and what have you, but it really is just the output of a very complex function that takes your position (3d coordinates, and I guess time as well) as input and always generates the exact same environment around you based on that, no matter where you are in this gigantic universe.
This is incredibly smart and interesting but there are a few things that I don't understand:
How can you interact with the environment and have any effect on it if it is the result of a deterministic function? You would have to "update" that function every time you interact, don't you?
How can multiple players interact with the environment and see the same changes?
How can only a single player blow out a piece of rock and then expect it to stay blown apart? Does it change the "world-generating function" ?
Answer
How can you interact with the environment and have any effect on it if it is the result of a deterministic function? You would have to "update" that function every time you interact, don't you?
It's a simple enough concept to create any unmodified point in space / event in time / combination of these using a fixed function. The downside is that when any player modifies the procedurally generated world at a given place or point in time, you have to store (the results of) this change, potentially across clients in a multi-client environment. Because of custom modifications, the function alone is no longer sufficient to produce that given point in space/time... instead you must first generate it from that function, and then apply any deltas that players have created at that point in time/space.
I suspect that anyone who was able to change this simple but apparently logically-immutable fact could make a lot of money indeed and the knock-on effects of this would run quite outside of just developing virtual worlds, as this idea runs deep into the heart of both compression and cryptography.
How can multiple players interact with the environment and see the same changes?
The deltas must be sent. The function alone cannot produce player-made changes on another player's machine, so the only way to get that information across is to send it.
How can only a single player blow out a piece of rock and then expect it to stay blown apart? Does it change the "world-generating function"?
No (as noted above).
deterministic or not?
Procedural generation should, by default, be considered to be an inherently deterministic concept as without being so, you can see how the whole approach falls apart.
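The function-plus-deltas idea above can be sketched in a few lines. The hash-based "terrain" here is just a stand-in for a real generator, not No Man's Sky's actual algorithm:

```python
import hashlib

# Sketch of function-plus-deltas: the base world is a pure function of
# position, and player edits live in a separate store consulted first.

def base_block(x, y, z):
    """Deterministic: the same coordinates always yield the same block."""
    digest = hashlib.sha256(f"{x},{y},{z}".encode()).digest()
    return "rock" if digest[0] % 2 else "air"

deltas = {}  # player modifications, keyed by position

def modify(x, y, z, block):
    # In multiplayer, this dictionary entry is exactly what gets sent to
    # other clients; the generating function itself never changes.
    deltas[(x, y, z)] = block

def world_block(x, y, z):
    return deltas.get((x, y, z), base_block(x, y, z))
```

Blowing up a rock is then just `modify(x, y, z, "air")`: the function keeps producing the original rock, and the delta overrides it forever after.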
I am trying to write a Pixelshader for a curve effect in Direct2d.
A curve effect maps each color channel value to a different value by using a look up table.
For this effect I would need to pass 3 arrays to the effect. Each array has 256 entries to map the specific color channel.
How can I pass these arrays to a Pixel shader (i.e. Direct2d Effect)?
Answer
You can pass arrays as 1D textures (which usually are 2D textures with height set to 1), or as 2D textures if you need to store more than 2048/4096/8192 items, depending on the graphics card. A 1D texture look-up is done by dividing the array index by the array size and then aligning to texel centers with the required offset. A 2D texture look-up is about the same; the only difference is the texture-coordinate calculation from the array index. It's something like this with 2D textures:
float2 texcoord = ( float2( array_index % texture_width, array_index / texture_width ) + texel_offset ) / float2( texture_width, texture_height );
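The same index-to-texcoord math written out in plain Python, with example texture sizes, so the row/column split and half-texel offset are explicit:

```python
# Index -> 2D texture coordinate for a LUT packed into a texture.
# Texture sizes are example values: a 16x16 texture holds 256 entries.
# The half-texel offset centers the sample on a texel so filtering
# doesn't blend in neighboring entries.

TEX_W, TEX_H = 16, 16

def lut_texcoord(array_index):
    """Map a LUT index to normalized (u, v) texture coordinates."""
    x = array_index % TEX_W    # column within the row
    y = array_index // TEX_W   # which row the index falls in
    return ((x + 0.5) / TEX_W, (y + 0.5) / TEX_H)
```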
Most of the time I run into this problem by saying listening music. Is there any traditional cause behind this?
Can we use listen music or listening music?
It seems there is only a slight difference, but why is listen to music the right way to say it?
Answer
Listen is an intransitive verb: it does not take a direct object. It means “be attentive to sounds”. Consequently, these are complete sentences.
I listen.
Listen! ... (This is a command, with the subject you understood.)
If you want to indicate that you are attentive to a particular kind of sound you must express this with a prepositional phrase.
I listen to music. ... Music is playing and I attend to it.
Listen for the bell. ... The bell will ring; be attentive, so you notice when that happens.
I knew that we use the verb "divide" with the preposition "into" when we want to say something is divided into parts:
He divided a pear into two parts.
But why can we also say (as I see here in Oxford):
He divided a pear in two.
Answer
As it says here, except for such phrases as divide in half and divide in two, the preposition into is used because divide emphasizes separating, breaking up or cutting up a whole into sections or parts, changing the state or form of something. When half and two are used as adjectives, the correct phrasing is divide into.
I want a function that lets me follow a vector with a bit of offset or delay, like it's done in the TrailRenderer. I tried to achieve this with the Lerp function, but that didn't work the way I wanted, since I had to wait a specific time until the lerp was done before updating the targetVector of the Lerp again, which led to a bad result.
A possible solution to my problem would be a List that holds all positions of my vector from the past 50 frames or so, and I'd assign those values to my other vector with a specific offset. But I was wondering if there is some built-in method in Unity that could achieve this for me.
Edit: This is about moving a Canvas
Answer
What you describe, in terms of following a position's exact path with a time delay, might be both more complicated and less satisfying than what you want.
Here's a mock-up of two possibilities:
The object labeled "Path" follows the exact path of the leader object, a constant amount of time behind it. This means:
When the leader starts moving, there is a delay before the follower moves. Likewise for stopping.
Any kinks or jitter in the leader's movement is repeated exactly by the follower.
The object labeled "Blend" mixes together its current position with the leader's each frame, so that it is attracted toward it asymptotically.
The follower always moves directly toward the leader, cutting corners tighter and smoothing out jitter to a degree.
The follower moves slower when it's close to the leader and faster when far away, making its follow distance/delay non-constant (but for some applications this gives a pleasant ease-in and ease-out to its motion, effectively "for free").
The great thing about the blend approach is how simple the code is:
public class BlendFollower : MonoBehaviour {
public Transform leader;
public float followSharpness = 0.05f;
void LateUpdate () {
transform.position += (leader.position - transform.position) * followSharpness;
}
}
I use this about 80% of the time when I want some quantity to track toward another in games - it works for floats, positions, colours, rotations, etc. It's a super useful little trick.
Edit: for rotation, you'd use something like this:
transform.rotation = Quaternion.Lerp(
transform.rotation,
leader.rotation,
followSharpness);
Just ensure the Quaternions you start with are valid (not all zeroes or NaNs) or this will blow up. I often write a Quaternion.IsValid()
extension method to check this if I'm working with a lot of computed rotations.
Note that since this blends a constant amount per frame, at higher framerates it will be sharper and at lower framerates spongier. For purely visual effects this is often tolerable, but if gameplay outcomes depend on the follow rate then you'll either want to correct for this or move it to FixedUpdate so it's consistent.
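If you do want framerate independence without moving to FixedUpdate, one common correction is to derive the per-frame blend factor from the frame's delta time, so the gap to the leader decays at a fixed rate regardless of frame count. A sketch of that math (the half_life parameter, the time for the gap to halve, is a made-up tuning knob, not part of the code above):

```python
import math

# Framerate-corrected blend: the fraction of the gap closed per frame
# depends on dt, so long frames blend more and short frames blend less.

def blend_factor(half_life, dt):
    """Fraction of the remaining gap to close during a frame of length dt."""
    return 1.0 - math.exp(-math.log(2.0) / half_life * dt)

def follow(position, target, half_life, dt):
    return position + (target - position) * blend_factor(half_life, dt)
```

Simulating one second of following at 30 FPS or at 60 FPS now lands the follower in the same place, which the fixed followSharpness version does not.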
Compare this to the path follow:
(Variable framerates account for some of this complexity - if you want to follow a constant number of frames behind, rather than seconds, then the code is a bit simpler. Also, I threw in my path-drawing debug gizmo code too.)
public class PathFollower : MonoBehaviour {
const int MAX_FPS = 60;
public Transform leader;
public float lagSeconds = 0.5f;
Vector3[] _positionBuffer;
float[] _timeBuffer;
int _oldestIndex;
int _newestIndex;
// Use this for initialization
void Start () {
int bufferLength = Mathf.CeilToInt(lagSeconds * MAX_FPS);
_positionBuffer = new Vector3[bufferLength];
_timeBuffer = new float[bufferLength];
_positionBuffer[0] = _positionBuffer[1] = leader.position;
_timeBuffer[0] = _timeBuffer[1] = Time.time;
_oldestIndex = 0;
_newestIndex = 1;
}
void LateUpdate () {
// Insert newest position into our cache.
// If the cache is full, overwrite the latest sample.
int newIndex = (_newestIndex + 1) % _positionBuffer.Length;
if (newIndex != _oldestIndex)
_newestIndex = newIndex;
_positionBuffer[_newestIndex] = leader.position;
_timeBuffer[_newestIndex] = Time.time;
// Skip ahead in the buffer to the segment containing our target time.
float targetTime = Time.time - lagSeconds;
int nextIndex;
while (_timeBuffer[nextIndex = (_oldestIndex + 1) % _timeBuffer.Length] < targetTime)
_oldestIndex = nextIndex;
// Interpolate between the two samples on either side of our target time.
float span = _timeBuffer[nextIndex] - _timeBuffer[_oldestIndex];
float progress = 0f;
if(span > 0f)
{
progress = (targetTime - _timeBuffer[_oldestIndex]) / span;
}
transform.position = Vector3.Lerp(_positionBuffer[_oldestIndex], _positionBuffer[nextIndex], progress);
}
void OnDrawGizmos()
{
if (_positionBuffer == null || _positionBuffer.Length == 0)
return;
Gizmos.color = Color.grey;
Vector3 oldPosition = _positionBuffer[_oldestIndex];
int next;
for(int i = _oldestIndex; i != _newestIndex; i = next)
{
next = (i + 1) % _positionBuffer.Length;
Vector3 newPosition = _positionBuffer[next];
Gizmos.DrawLine(oldPosition, newPosition);
oldPosition = newPosition;
}
}
}
I'd better get a quart. (daum.net)
There’s a had better usage in the sentence above. I’m not trying to figure out what the original form would have been, but can the construction below be made? (When the subject I is logically the object of the non-finite verb get, it could be thought of as tough-movement. But that's not the case here.)
I’d be better to get a quart.
Answer
Did you mean to change it to a question, or to ask if the following is sensible?
I'd be better to get a quart.
If you said that to me I could respond by making a quizzical face and say:
"I'd be better to get a quart?"
(...as a challenge to the fact that I didn't understand what you just said, because it sounds weird.)
Yet it could pass for old-timey pirate language, as a sort of short-hand for "I'd be better off if I were to get a quart."
first mate: "Cap'N, would you like me fetch ye a gallon of skunk whiskey?"
cap'n: "Arrr, nay! I'd be better to get a quart of yonder Basil Hayden."
If you spoke like that people would know what you mean (and that you were a pirate). But it's not normal speech, and you should go with "I'd better get a quart."...assuming you live in a place where people know what quarts are.
Closed until Monday means it will open on Monday.
But I have a contract with them until Dec 31st means the contract is still valid on the 31st...
Why is that?
Also, regarding Closed until Dec 31st:
Is this option not as clear as using the day of the week?
Would it be clearer to say 'Closing until the end of Dec 31st' (if I want to include that date)?
Would using the day of the week make it clearer than using just the date?
If you look at this Link you'll see UNet is deprecated and will be removed in the future.
I'm developing a mobile multiplayer game. I've read the official advice about it, but they say "You can publish it with 2018.4 (LTS)", and I can't use the 2018.4 version because my project uses LWRP, which is not in the 2018.x versions.
I've found the alpha version of the new system right here
My questions are:
1- Should I use the alpha version of the new system? And can I publish my game with it?
2- Should I wait for the new system to come out? (I can't wait for months.)
3- Should I try other third-party systems like Photon?
Or is there any way to publish my game with a nice multiplayer system?
Note: I can't pay for a multiplayer matchmaking system. Because of that, I want to make a host-based multiplayer game.
Answer
Sure, why not? Photon 2 (PUN) is currently the number one platform used by professional studios and developers. With UNet being deprecated by Unity, Photon is claiming to be the top platform for creating real-time multiplayer games. Photon recently reworked the implementation in PUN 2 with better-optimized and well-structured code.
I think you should use the free version of Photon 2 (PUN). It is very easy to use, and it is also fast, reliable and scalable.
check these links:
https://www.photonengine.com/ https://www.youtube.com/watch?v=rF2JGhv3Pyo
and also you can learn how to use it with:
https://paladinstudios.com/2014/05/08/how-to-create-an-online-multiplayer-game-with-photon-unity-networking/ https://www.youtube.com/watch?v=evrth262vfs
...you know the ones. Something like "value overlays" or "value pop-ups" describes them quite well, but I was thinking they might have a real name?
Answer
"Floating text" does it for me. That's what I've always called it, although I've never used it specifically for indicating damage done. My usage was usually to indicate where something important in the scene was (e.g. "retrieve this data disk").
I have come across the need to use a scripting engine for my C++ game, but after experimenting with many languages over the last few days, nothing has truly stood out as the obvious choice for a language and/or binding library.
I would like to have the ability to
Lua looks nice, but I'm unsure of its ability to do 1 and 3. Python and Boost::Python are great, but the same applies. Regardless, here's an example of an ideal implementation.
C++
class Game
{
...
void printNumber(int arg);
};
int main( )
{
Game *game = new Game();
// Make method available to script
functionRegister("printNumber", game->printNumber);
// Test and call function in script
if ( functionExists("greet") )
functionCall("greet", "Hello world");
}
Script pseudocode
function greet( greeting )
print( greeting )
game.printNumber( 42 )
end
The output may be
Hello world
42
Is there a language and library that will pull this off?
Answer
I'm fairly sure Lua can do everything you need relatively simply. I use Lua and C++ in my game. I looked at various wrappers like LuaBind, or using a generator like Swig, but I decided I didn't want any of that stuff and I wrote my own wrapper which I ended up making open source in case other people found it useful.
Using my little library you can do stuff like this in C++:
// Wrap a C++ function
static int Widget_AddChild(lua_State* L)
{
// Typesafe way to extract userdata from Lua
Widget* parent = luaW_check<Widget>(L, 1);
Widget* child = luaW_check<Widget>(L, 2);
lua_pushboolean(L, parent->AddChild(child));
return 1; // number of values pushed onto the Lua stack
}
static luaL_reg Widget_Metatable[] =
{
// Register the function you wrote above
{ "AddChild", Widget_AddChild },
// Widgets also have some getter and setter functions,
// Using these templates you can automatically generate
// wrapper functions for them
{ "GetStyle", luaU_get<Widget, Style, &Widget::GetStyle> },
{ "SetStyle", luaU_set<Widget, Style, &Widget::SetStyle> },
{ "Style", luaU_getset<Widget, Style, &Widget::GetStyle, &Widget::SetStyle> },
{ NULL, NULL }
};
int luaopen_Widget(lua_State* L)
{
luaW_register<Widget>(L, "Widget", NULL, Widget_Metatable);
return 1;
}
In lua you could now do something like this:
local w = Widget.new()
w.foo = 10 -- Foo is stored on w, it's not accessible to other instances
-- You can also add values accessible to all Widgets
function Widget.metatable:newfunction()
print(self:GetWidth())
end
w:newfunction()
With all that, you can fairly easily call C++ functions from Lua (and vice versa, but I didn't illustrate that here). Once you know how to interact with the Lua API, writing a function to 'safely' call a Lua function (i.e. call it only if it exists) should be easy as well.
Edit:
Like I said, if you want to check for the existence of a function before calling it, just verify that the value is a function first.
lua_getglobal(L, "myFunc"); // You can get your function from anywhere, this is just for example
if (lua_isfunction(L, -1)) // Check the top of the stack, make sure it's a function
{
lua_call(L, 0, 0);
}
else
{
lua_pop(L, 1); // Clean up the non-function value left on the stack
}
As far as I understand, in American English that must be used instead of which in the sentence
"Of these two birds the male is that which is colored brighter"
since the clause is restrictive. On the other hand, "that that" is definitely not an option. Does this mean that the sentence is impossible in formal writing and must be reworded?
Answer
There's nothing wrong with that which here.
You are mistaken in your belief that that must be employed with restrictive relative clauses: both that and wh- relatives may be used in this context.
The idea of employing only that with restrictive relatives was first advanced in 1851, at a time when grammar-writers were inclined to rationalize the language. It was given wide currency by the Fowler brothers' The King's English, which argued that "[I]f we are to be at the expense of maintaining two different relatives, we may as well give each of them definite work to do", and by the elder Fowler's even more influential Modern English Usage. It was subsequently adopted by some fairly reputable style guides.
But it is not a rule in any register, formal or informal. Some people follow it, others do not; and even those who follow it acknowledge many situations where it not only may be suspended but must be. Fowler himself acknowledged that "[I]t would be idle to pretend that it is the practice either of most or of the best writers."
Sentence
A feasibility survey has now been completed in India to establish a network to felicitate contacts between small and medium enterprises.
What is the right form?
Between small and medium enterprises.
or
Among small and medium enterprises.
Answer
There is a difference in meaning, so the “right” form depends on what you want to say.
Between indicates that something happens involving (at least) two specific (types of) entities.
A feasibility survey has now been completed in India to establish a network to felicitate contacts between small and medium enterprises.
This sentence means that contacts are facilitated between small enterprises and medium enterprises.
While it is possible that this is the intended meaning, I doubt it. It would make sense if one wrote something like:
We are trying to facilitate better contacts between citizens and the government.
Usually, the expression small and medium enterprises is used to refer to all those enterprises that are not considered “big”. So we are talking about one group of enterprises.
Among is used to indicate interaction between members of one specific group(*). So it is very likely that the actually intended meaning of the sentence is indeed:
A feasibility survey has now been completed in India to establish a network to felicitate contacts among small and medium enterprises.
This would mean that enterprises in the groups of small and medium enterprises form contacts with other enterprises in the same group. To my mind, this makes more sense than the version with between.
Another simple example of this use, to contrast it with the contacts between citizens and the government:
We also try to facilitate contacts among citizens.
I'm trying to use this tutorial in MonoGame, but the effect does not work; it only renders a black screen. Everything by itself renders just fine, and I checked that the render targets do get drawn to, but when I try to apply the effect to them it only renders a black screen.
I converted the .fx file using 2MGFX, and also tried loading it with the following code:
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\Lighting.mgfx", FileMode.Open));
lightingEffect = new Effect(GraphicsDevice, Reader.ReadBytes((int)Reader.BaseStream.Length));
also with
lightingEffect = Content.Load<Effect>("Lighting.mgfx");
and also with
byte[] bytecode = File.ReadAllBytes("Content\\Lighting.mgfx");
lightingEffect = new Effect(graphics.GraphicsDevice, bytecode);
They did not make a difference.
The .fx file contains the following:
sampler s0;
texture lightMask;
sampler lightSampler = sampler_state { Texture = <lightMask>; };
float4 PixelShaderFunction(float2 coords: TEXCOORD0) : COLOR0
{
float4 color = tex2D(s0, coords);
float4 lightColor = tex2D(lightSampler, coords);
return color * lightColor;
}
technique Technique1
{
pass Pass1
{
PixelShader = compile ps_4_0_level_9_1 PixelShaderFunction();
}
}
And the monogame version of the game1.cs(Only the important stuff):
public class Game1 : Game
{
GraphicsDeviceManager graphics;
SpriteBatch spriteBatch;
Texture2D lightMask;
Texture2D SquareGuy;
RenderTarget2D lightsTarget;
RenderTarget2D mainTarget;
Effect lightingEffect;
public Game1()
{
graphics = new GraphicsDeviceManager(this);
Content.RootDirectory = "Content";
IsMouseVisible = true;
}
protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
lightMask = Content.Load<Texture2D>("lightmask");
SquareGuy = Content.Load<Texture2D>("SquareGuy");
// lightingEffect = Content.Load<Effect>("Lighting.mgfx");
/*
byte[] bytecode = File.ReadAllBytes("Content\\Lighting.mgfx");
lightingEffect = new Effect(graphics.GraphicsDevice, bytecode);
*/
BinaryReader Reader = new BinaryReader(File.Open(@"Content\\Lighting.mgfx", FileMode.Open));
lightingEffect = new Effect(GraphicsDevice, Reader.ReadBytes((int)Reader.BaseStream.Length));
var pp = GraphicsDevice.PresentationParameters;
// pp.BackBufferHeight = 1024;
// pp.BackBufferWidth = 1024;
lightsTarget = new RenderTarget2D(
GraphicsDevice, pp.BackBufferWidth, pp.BackBufferHeight);
mainTarget = new RenderTarget2D(
GraphicsDevice, pp.BackBufferWidth, pp.BackBufferHeight);
}
protected override void Draw(GameTime gameTime)
{
GraphicsDevice.Clear(Color.CornflowerBlue);
// TODO: Add your drawing code here
GraphicsDevice.SetRenderTarget(lightsTarget);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Additive);
spriteBatch.Draw(lightMask, new Vector2(0, 0), Color.White);
spriteBatch.Draw(lightMask, new Vector2(100, 0), Color.White);
spriteBatch.Draw(lightMask, new Vector2(200, 200), Color.White);
spriteBatch.Draw(lightMask, new Vector2(300, 300), Color.White);
spriteBatch.Draw(lightMask, new Vector2(500, 200), Color.White);
spriteBatch.End();
// Draw the main scene to the Render Target
GraphicsDevice.SetRenderTarget(mainTarget);
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin();
spriteBatch.Draw(SquareGuy, new Vector2(100, 0), Color.White);
spriteBatch.Draw(SquareGuy, new Vector2(250, 250), Color.White);
spriteBatch.Draw(SquareGuy, new Vector2(550, 225), Color.White);
spriteBatch.End();
// Draw the main scene with a pixel
GraphicsDevice.SetRenderTarget(null);
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
lightingEffect.Parameters["lightMask"].SetValue(lightsTarget);
lightingEffect.CurrentTechnique.Passes[0].Apply();
spriteBatch.Draw(mainTarget, Vector2.Zero, Color.White);
spriteBatch.End();
base.Draw(gameTime);
}
}
So the question is what do I need to change to make the effect work in monogame?
I tried it in XNA with this code and it worked perfectly. I know almost nothing about HLSL, so I don't know if there's something wrong with the effect. I only changed the shader model from ps_2_0 to ps_4_0_level_9_1. If I change it to ps_4_0_level_9_3 it still doesn't work. I read something about assigning textures from outside of the .fx here, but since I know so little about shaders I don't know if this is the problem.
Answer
I had this same problem last week. You need to add all the parameters to your PixelShaderFunction, or MonoGame will map the registers incorrectly. Change your PixelShaderFunction parameters to be like this:
float4 PixelShaderFunction(float4 pos : SV_POSITION, float4 color1 : COLOR0, float2 coords : TEXCOORD0) : COLOR0
Here is a more detailed explanation, where I got the answer: http://www.software7.com/blog/pitfalls-when-developing-hlsl-shader/
I'm trying to write some collision detection code.
At the moment I have the code properly resolving ellipsoid v. ellipsoid collisions and box v. box collisions, but ellipsoid v. box collisions don't work.
In general, how do you test for intersection between these two?
What I've been doing is:
Finding the closest point on the ellipsoid to the box, and vice versa.
vector3 closestVertexOnBox = // Closest vertex on the box to the ellipsoid;
vector3 closestPointOnEllipsoid = // Closest vertex on the ellipsoid to the box
In this case ( ellipsoid v. box ) you have 4 axes.
Put them all in an array
vector3[] axes = { normalize( closestVertexOnBox - ellipsoid.origin ), unitX, unitY, unitZ }
Then, for each axis in that array, project the vector from the center of each shape to its vertex closest to the other shape onto that axis to find the projected half-widths.
float minOverlap = infinity
vector3 overlapAxis;
for each axis in axes
{
float prjBoxHalfWidth = Dot ( axis , closestVertexOnBox - box.origin )
float prjEllipsoidHalfWidth = Dot ( axis , closestPointOnEllipsoid - ellipsoid.origin )
If, on any axis, the sum of the projected half-widths is smaller than the projected distance between the two, then there is no collision
float prjDistance = Dot( axis , box.origin - ellipsoid.origin )
if(prjBoxHalfWidth + prjEllipsoidHalfWidth < prjDistance) return // no collision
Now get the axis with the least overlap out of all of them
float overlap = prjBoxHalfWidth + prjEllipsoidHalfWidth - prjDistance
if(overlap < minOverlap) {
minOverlap = overlap;
overlapAxis = axis;
}
Then resolve the overlap
ellipsoid.origin += overlapAxis * minOverlap/2
box.origin -= overlapAxis * minOverlap/2
That's pretty much my (pseudofied) code. If that's good, then I might have an implementation error; but if that's wrong, then what am I not taking into consideration or forgetting?
Answer
How do you store the ellipsoid?
If it has a position, an orientation and radii along the local x- and y-axes, it might be easier to calculate the inverse transformation matrix that transforms the ellipsoid into a circle. Transform the AABB into that space too using the same matrix; then you can run simple triangle-circle collision tests.
Edit: In the left image you see the original ellipsoid with the axis-aligned box. In the right image both objects are transformed by the ellipsoid's inverse matrix, so the ellipsoid becomes a circle and the AABB becomes 2 triangles. Now it's only a matter of 2 triangle-circle tests.
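To make the transform idea concrete, here's a minimal 2D sketch (my own names and simplifications, not from any engine). It assumes the ellipse is axis-aligned, so scaling space by the inverse radii turns it into a unit circle while the AABB stays an AABB, and a plain circle-vs-box test suffices; a rotated ellipse would turn the box into the two triangles described above.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Hypothetical axis-aligned shapes; names are illustrative only.
struct Ellipse { Vec2 center; float rx, ry; };   // radii along x and y
struct AABB    { Vec2 min, max; };

// Scale space by (1/rx, 1/ry): the ellipse becomes a unit circle and an
// axis-aligned box stays axis-aligned. Then do a standard circle-vs-AABB
// test (closest point on the box to the circle's center).
bool EllipseIntersectsAABB(const Ellipse& e, const AABB& b)
{
    float sx = 1.0f / e.rx, sy = 1.0f / e.ry;
    Vec2 c    { e.center.x * sx, e.center.y * sy };
    Vec2 bmin { b.min.x * sx, b.min.y * sy };
    Vec2 bmax { b.max.x * sx, b.max.y * sy };

    // Closest point on the scaled box to the scaled center.
    float cx = std::clamp(c.x, bmin.x, bmax.x);
    float cy = std::clamp(c.y, bmin.y, bmax.y);
    float dx = c.x - cx, dy = c.y - cy;
    return dx * dx + dy * dy <= 1.0f;            // unit circle radius
}
```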
Edit
Problem solved (see Drackir's answer). Here's a demo of what I was trying to achieve with this joint. More info about the scenario on this other question.
Problem
I'm trying to create a very specific type of joint in Farseer that behaves like this:
Here's a picture:
If there's no way to achieve this with Farseer out of the box, how would I go about extending it to create a new type of joint that works like this? Basically I want every body to behave like a "clone" of the others, but with a fixed offset between them.
Answer
Ok, after about two hours of tinkering I managed to do this but it requires adding some extra bodies. You'll probably want to extract this stuff into a method/class but the basic idea is this:
Create what I call "holder" bodies for your objects. These bodies share the size and position of your "objects" but don't participate in collisions. Essentially, they are clones.
//Create "Main" (center) body
_bodyMain = BodyFactory.CreateRectangle(World, 5f, 5f, 1f);
_bodyMain.BodyType = BodyType.Dynamic;
_bodyMain.Position = new Vector2(2, 2);
//Create "MainHolder"
_bodyMainHolder = BodyFactory.CreateRectangle(World, 5f, 5f, 1f);
_bodyMainHolder.BodyType = BodyType.Dynamic;
_bodyMainHolder.CollisionCategories = Category.None; //Prevents collisions
_bodyMainHolder.Position = new Vector2(2, 2);
_bodyMainHolder.IsSensor = true;
//See http://farseerphysics.codeplex.com/discussions/222524 for why they're sensors
_bodyMainHolder.FixedRotation = true;
//Note: Only add FixedRotation for the main one, leave it off of the other
// holders. I'll explain later.
Attach each holder to the "mainHolder" using a WeldJoint. This will prevent them from moving away from one another. The FixedRotation on the mainHolder prevents them from rotating. I don't know if there's a bug in Farseer, but if you add a WeldJoint and both bodies have FixedRotation == true, the joint doesn't work properly. This is why only the main holder has .FixedRotation = true.
//Weld the holders together so they don't move. Assumes another holder is defined
// for the object on the right side (bodyRightHolder).
JointFactory.CreateWeldJoint(World, _bodyMainHolder, _bodyRightHolder,
_bodyRightHolder.Position - _bodyMainHolder.Position, Vector2.Zero);
Attach your "object" bodies to their respective holders using RevoluteJoints. This locks your objects to the holders but allows them to rotate freely.
//Lock the actual bodies to the holders
JointFactory.CreateRevoluteJoint(World, _bodyMainHolder, _bodyMain, Vector2.Zero);
JointFactory.CreateRevoluteJoint(World, _bodyRightHolder, _bodyRight, Vector2.Zero);
Make your real objects rotations match each other by attaching an angle joint. Note: you don't have to attach an angle joint to each and every pair of objects. Just add an angle joint to the main body and the other body and the rotations will translate across all objects.
//Make them rotate the same as each other
JointFactory.CreateAngleJoint(World, _bodyMain, _bodyRight);
That's it! Just add the extra code for each other object/object holder you want and it will handle the rest. Here's an image to illustrate my test:
Here you will see the green bodies are the holders. They do not rotate or collide and are welded together. The yellow and red bodies are your "objects" (yellow is the main). You can see that they are rotated by the same amount and are rotating around their respective holders. Also, only the red and yellow bodies participate in collisions. I believe this meets all three of your conditions above.
Working Example
If you load up the Farseer "Samples XNA" solution and find SimpleDemo1.cs ("Samples XNA" project > "Samples" folder), I rewrote it (code here) to test.
Hope this helps. Let me know if you have any questions.
I'm developing a simple, 2D physics system to complement an entity/component game object framework. So far, I have implemented some basic, tutorial-level physics. An entity that is affected by physics must have two components:
The physics engine currently uses Verlet integration to move entities - that is, velocity is derived from the current and previous positions of an entity, and is not explicitly stated anywhere.
I would now like to start implementing some joints, starting with the basics and perhaps expanding as I grow more familiar with the concepts.
The first joint I attempted to implement was extremely simple - the fixed joint, whereby two entities are 'fixed' together and their transforms may not change relative to each other. My approach was to make one entity an immovable child of the other - that is, to set the transform of A as a local transform relative to B and disable movement of A by passing all accumulated forces of the rigidbody of A to the parent, B. Already this seems hackish and inflexible and has issues with gravity (B ends up with two gravity forces acting on it) - I'm clearly heading in the wrong direction.
I have searched for some literature on the subject but have only found either very basic tutorials that only cover what I've already done, or articles with advanced mathematical formulae that are difficult to follow or relate to simulation in any meaningful way.
This leads to two questions:
Answer
This article ("Advanced Character Physics" by Thomas Jakobsen, with a PDF mirror here that preserves images) discusses solving fixed distance constraints (which sound to me like your fixed joints) between particles by relaxation -- specifically you want the section "Solving several concurrent constraints by relaxation" on page 2, I think -- treating the constraints as infinitely stiff springs. I found this article approachable enough years ago when I was implementing something similar, so hopefully it will have what you need.
A relevant passage:
One may think of this process as inserting infinitely stiff springs between the particle and the penetration surface – springs that are exactly so strong and suitably damped that instantly they will attain their rest length zero. We now extend the experiment to model a stick of length 100. We do this by setting up two individual particles (with positions x1 and x2) and then require them to be a distance of 100 apart. Expressed mathematically, we get the following bilateral (equality) constraint: |x2 - x1| = 100.
Although the particles might be correctly placed initially, after one integration step the separation distance between them might have become invalid. In order to obtain the correct distance once again, we move the particles by projecting them onto the set of solutions described by [the above equality constraint]. This is done by pushing the particles directly away from each other or by pulling them closer together (depending on whether the erroneous distance is too small or too large).
This page appears to have source code of a concrete example, although I'm not sure of its quality.
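The relaxation step quoted above can be sketched in a few lines (my own simplification: two particles and one distance constraint; a real solver would loop this over all constraints several times per frame):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// One relaxation pass for a fixed-distance constraint |x2 - x1| = rest,
// in the style of Jakobsen: push or pull both particles back onto the
// constraint, moving each by half the error along the separating axis.
// Assumes the particles are not exactly coincident (dist > 0).
void SatisfyDistanceConstraint(Vec2& x1, Vec2& x2, float rest)
{
    float dx = x2.x - x1.x;
    float dy = x2.y - x1.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    float correction = (dist - rest) / dist * 0.5f;  // half the error each
    x1.x += dx * correction;  x1.y += dy * correction;
    x2.x -= dx * correction;  x2.y -= dy * correction;
}
```

With Verlet integration there is no explicit velocity to fix up afterward; the position change alone is enough, which is what makes this approach so simple.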
I am trying to put some text on a Model and I want it to be dynamic. Did some research and came up with drawing the text on a texture and then setting it on the model. I use something like this:
public static Texture2D SpriteFontTextToTexture(SpriteFont font, string text, Color backgroundColor, Color textColor)
{
Size = font.MeasureString(text);
RenderTarget2D renderTarget = new RenderTarget2D(GraphicsDevice, (int)Size.X, (int)Size.Y);
GraphicsDevice.SetRenderTarget(renderTarget);
GraphicsDevice.Clear(Color.Transparent);
Spritbatch.Begin();
//have to redo the ColorTexture
Spritbatch.Draw(ColorTexture.Create(GraphicsDevice, 1024, 1024, backgroundColor), Vector2.Zero, Color.White);
Spritbatch.DrawString(font, text, Vector2.Zero, textColor);
Spritbatch.End();
GraphicsDevice.SetRenderTarget(null);
return renderTarget;
}
When I was working with primitives and not models everything worked fine, because I set the texture exactly where I wanted. But with the model (a RoundedRect 3D button), it now looks like this:
Is there a way to have the text centered only on one side?
Answer
You need to modify your model so that the UV (texture) coordinates place the texture at the correct location.
It's possible that setting the texture address mode to clamp may (sort-of) solve your issue. But this also depends on your model having the correct UV coordinates to make it work.
GraphicsDevice.SamplerStates[0] = SamplerState.LinearClamp;
(The default mode is wrap, which will cause texture coordinates outside the range of 0..1 to wrap around when addressing the texture. Clamp will duplicate the edge pixels outwards, outside that range.)
In the phrase below...
How to improve your spoken English
is the word "spoken" an adjective? Can the word "spoken" usually take the form of an adjective? I see this construction very often, but when I looked it up in the Cambridge dictionary, it was listed only as a verb.
Answer
Yes, this spoken is an adjective.
In my opinion, Macmillan Dictionary is friendlier to learners; for example, you can look up the word spoken on their website, and you will find that it's clearly defined as an adjective, with the definition: "spoken language is things that people say, not things that they write".
For more information related to using -ed and -ing verb forms as adjectives, I'd like to recommend reading Participial Adjectives @ The Internet Grammar of English.
Which is more resource-efficient (given a typical modern 3D game scene):
The idea is that anti-aliasing is a really expensive operation and avoiding it makes things faster. Is this the case?
I haven't had the opportunity to try it, since I work almost exclusively with 2D games.
Answer
The second option - drawing at a higher resolution than the target and downsampling to the target resolution - is known as supersampling and is considered a form of AA; if you read about this topic you'll see it referred to as SSAA.
It will almost certainly be slower than turning on other AA techniques built into modern games, such as MSAA (multisampled antialiasing, which is usually what people mean when they just say "AA" without any other qualifiers) or postprocessing effects like FXAA or MLAA. All these techniques were developed precisely to provide a faster alternative to SSAA.
For example, if you're doing 4xSSAA (let's say), you're drawing an image twice the width and height of the final image, so that you have 4 samples per pixel of the final image. Then, whenever you run a pixel shader on some object, you're running it 4 times for each final image pixel touched by that object. In contrast, with 4xMSAA you'll only run the pixel shader once for each final image pixel touched, and replicate that result to all 4 samples. This allows you to greatly reduce the amount of shader work while still getting the effects of antialiasing along geometry edges, which is where it matters most. This is typically much faster than SSAA. Modern GPUs also have compression schemes to reduce the memory bandwidth used for MSAA, which helps performance further.
And if you turn on a postprocessing AA method it'll be even faster (although the results aren't quite as good), as it is simply rendering the frame normally, at the target resolution, then going over it afterward with a shader that detects and repairs jaggies.
So, you can expect that the AA methods available in typical games these days are far faster than simply rendering at a higher resolution and downsampling.
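For illustration, the resolve step of 4xSSAA is nothing more than a box-filter downsample of the oversized buffer. A sketch (my own simplification, single-channel floats for brevity; a real framebuffer would be RGBA):

```cpp
#include <cassert>
#include <vector>

// Resolve a buffer rendered at 2x the target width and height down to the
// target resolution by averaging each 2x2 block of samples (4xSSAA).
std::vector<float> ResolveSSAA(const std::vector<float>& src,
                               int dstWidth, int dstHeight)
{
    int srcWidth = dstWidth * 2;
    std::vector<float> dst(dstWidth * dstHeight);
    for (int y = 0; y < dstHeight; ++y)
        for (int x = 0; x < dstWidth; ++x)
        {
            int sx = x * 2, sy = y * 2;
            // Average the 4 supersamples covering this output pixel.
            float sum = src[sy * srcWidth + sx]
                      + src[sy * srcWidth + sx + 1]
                      + src[(sy + 1) * srcWidth + sx]
                      + src[(sy + 1) * srcWidth + sx + 1];
            dst[y * dstWidth + x] = sum * 0.25f;
        }
    return dst;
}
```

The resolve itself is cheap; the expense of SSAA is in shading 4x the pixels before this step ever runs, which is exactly what MSAA avoids.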
Are the words "toward" and "towards" synonymous? If not, when should I use one over the other? "Towards" usually sounds silly to my ear, but is that just me?
Answer
The -ward root in words like forward, backward or toward is related to the Latin vertere and versus (to turn) and goes as far back as Sanskrit (vartate).
So this root has a rich history and has appeared in various altered forms in numerous Indo-European languages.
In German we have wärts which has the s: rückwärts (backward(s)) and vorwärts (forward(s)).
Evidently, in Old English the -ward root was either -weard or -weardes. So even in ancient times, there were already two forms: one with an es and one without. The idea that Brits use -wards, whereas -ward is a modern Americanism simply does not hold water since both versions trace back to respective Old English forms.
In any case, there is no need to have any qualms about putting the s on -ward or about leaving it off.
OpenGL contexts before and after OpenGL 3.0 are rather different. So far I've really only worked with buffers on either side anyway; I do know the most notable difference is the lack of Immediate Mode.
Throwing out Immediate Mode considerations all together, what important differences should I look out for specifically when coding low-level two dimensional operations in a 2-D graphics engine?
Answer
In terms of the contexts specifically, there's little difference. Most OpenGL implementations have most of the features of OpenGL 3.0+ even when using a legacy context, due to the way OpenGL extensions work.
If you're specifically asking about what features in OpenGL 3.0 are worth using, some of the best ones are geometry shaders and instancing, both of which are useful even for 2D graphics in certain circumstances. However, in most cases for simple 2D, all you're going to be doing is filling up a streaming vertex buffer every frame and making a single draw call, so there's really very little extra you're going to be doing.
In terms of 2D in OpenGL in general, just make sure you have a texture atlas (sprite sheet) so you very rarely need to change texture states. You want to avoid doing a draw call per sprite, as that is incredibly inefficient, and rather you want to batch together a lot of sprites and draw them all at once. Render with painter's algorithm, where "render" means pushing the geometry into a vertex buffer, and draw it at the end. Post-process as you see fit.
Sometimes I get confused by articles, especially when it comes to definite articles.
My question is: How do I use the definite article "the" between two nouns?
Should I repeat "the" with each noun?
For example:
Do the earth and moon orbit the sun?
or
Do the earth and the moon orbit the sun?
Answer
Dropping the second definite article in that sentence represents a form of ellipsis. Briefly, ellipsis is
the omission, from a clause, of one or more words that are nevertheless understood in the context of the remaining elements.
So your sentence may be stated fully or with one or more elements removed—it doesn't matter which as long as the resulting statement is easily understood. Note, however, that use of ellipsis is a shade less formal than making a complete statement. That doesn't make it bad, however, and oftentimes even in formal prose the elliptical statement may be preferred because it seems more natural than a "complete" one.
Let's say I have an object A and an object B in a 2D game. I create a vector leading from A to B; its name is AB.
How can I make A move along the vector AB and reach B?
One way I was thinking of doing this is to calculate the angle between AB and the x-axis, and then move the object every game-loop cycle at that angle, using trigonometry.
I would calculate that angle by making a normalized vector (1,0) (the x-axis), normalizing AB, and then getting the angle between them by taking their dot product and applying arccos to it.
EDIT:
In this question: Make objects follow a strict path (Xna), the way someone suggested to move an object along a vector, is like so:
position += direction * speed * elapsed;
Where:
position = current position of the object.
direction = a normalized vector pointing in the direction of the destination.
speed = a scalar to decide how much to advance the object every cycle of the game-loop. (Is this a 'scalar'? Am I using this word correctly?)
elapsed = what is this?
I get everything but elapsed. What is this? Is this necessary?
Anyhow, is this method a good method? Would you recommend it?
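For reference, the quoted update rule can be sketched like this (my own helper names; elapsed is assumed to be the time in seconds since the previous frame, which is what makes the movement frame-rate independent):

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Move `position` toward `target` at `speed` units per second.
// `elapsed` is this frame's delta time in seconds; multiplying by it
// means the object covers the same distance per real second no matter
// how many frames the game renders.
Vec2 MoveToward(Vec2 position, Vec2 target, float speed, float elapsed)
{
    float dx = target.x - position.x;
    float dy = target.y - position.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    float step = speed * elapsed;
    if (dist <= step)                 // close enough: snap to the target
        return target;
    // Normalized direction * step = this frame's displacement
    // (position += direction * speed * elapsed).
    return { position.x + dx / dist * step,
             position.y + dy / dist * step };
}
```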
Thanks
I was registering a user account online at a sports club and got an error prompt saying "Can we get a spot?" See my picture below.
I know "spot" can mean dirt or a location, but I cannot figure out the meaning of the sentence in this context. Can anybody help?
Answer
In a gym, to "spot" for someone means to help them with heavy weights - either to get them into a starting position, or to be ready in case the weight slips so that they don't hurt themselves.
From wiktionary, definition 12:
(gymnastics, dance, weightlifting) One who spots (supports or assists a maneuver,
or is prepared to assist if safety dictates); a spotter.
In this case, they are making a joke with their 404 page. "Can we get a spot" means "Something went wrong and we need help".
Which of the following sentences are correct?
I think the second and the fourth ones are correct. But I'm not sure if the others are wrong.
Answer
In formal use all of these require were for the condition (IF) clause, because they are all "counterfactual", conditions "contrary to fact": you are not him, you are not in her place.
You have correctly employed "I'd" (= I would) as counterfactuals in the consequence (THEN) clauses.
The tense in a subordinate clause is not necessarily determined by the tense of a main clause, so the "want" and "have" clauses may take either the present or the past form: depending on circumstances, the "facts" (having money later and wanting another animal) may be regarded as either doubtful or certain.
But in colloquial use all are acceptable.
Are the two example questions correct or is there a rule that applies when using and not using contraction words?
How can I make a hole in an object in Unity 3D? I have an object something like this, and I want to make a hole in it.
Answer
You can't. Unity does not have CSG modeling options. You're going to either:
I'm a Flash ActionScript game developer who is a bit weak at mathematics, though I find physics both interesting and cool.
For reference this is a similar game to the one I'm making: Untangled flash game
I have made this Untangled game almost to full completion of the logic. But when two lines intersect, I need those intersected, or 'tangled', lines to show a different color: red.
It would be really kind of you people if you could suggest an algorithm for detecting line segment collisions. I'm basically a person who likes to think 'visually' than 'arithmetically' :)
Edit: I'd like to add a few diagrams to convey the idea more clearly.
P.S. I'm trying to make a function like:
private function isIntersecting(A:Point, B:Point, C:Point, D:Point):Boolean
Thanks in advance.
Answer
I use the following method which is pretty much just an implementation of this algorithm. It's in C# but translating it to ActionScript should be trivial.
bool IsIntersecting(Point a, Point b, Point c, Point d)
{
float denominator = ((b.X - a.X) * (d.Y - c.Y)) - ((b.Y - a.Y) * (d.X - c.X));
float numerator1 = ((a.Y - c.Y) * (d.X - c.X)) - ((a.X - c.X) * (d.Y - c.Y));
float numerator2 = ((a.Y - c.Y) * (b.X - a.X)) - ((a.X - c.X) * (b.Y - a.Y));
// Detect coincident lines (has a problem, read below)
if (denominator == 0) return numerator1 == 0 && numerator2 == 0;
float r = numerator1 / denominator;
float s = numerator2 / denominator;
return (r >= 0 && r <= 1) && (s >= 0 && s <= 1);
}
There's a subtle problem with the algorithm, though: the case in which two lines are coincident but don't overlap. The algorithm still returns an intersection in that case. If you care about that case, I believe this answer on Stack Overflow has a more complex version that addresses it.
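Since the question asks about porting, here is a direct JavaScript translation of the C# method above (close enough to ActionScript syntax to adapt easily); points are assumed to be plain {x, y} objects rather than a Point class:

```javascript
// Segment intersection test, translated from the C# version above.
// a-b and c-d are the two segments; each point is a plain {x, y} object.
function isIntersecting(a, b, c, d) {
  const denominator = (b.x - a.x) * (d.y - c.y) - (b.y - a.y) * (d.x - c.x);
  const numerator1  = (a.y - c.y) * (d.x - c.x) - (a.x - c.x) * (d.y - c.y);
  const numerator2  = (a.y - c.y) * (b.x - a.x) - (a.x - c.x) * (b.y - a.y);

  // Parallel or coincident segments; same caveat as the C# version.
  if (denominator === 0) return numerator1 === 0 && numerator2 === 0;

  const r = numerator1 / denominator;
  const s = numerator2 / denominator;
  return r >= 0 && r <= 1 && s >= 0 && s <= 1;
}
```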
Edit
I did not get a result from this algorithm, sorry !
That's strange, I've tested it and it's working for me except for that single case I described above. Using the exact same version I posted above I got these results when I took it for a test drive:
I'm a web developer and I am keen to start writing my own games.
For familiarity, I've chosen JavaScript and the canvas element for now.
I want to generate some terrain like that in Scorched Earth.
My first attempt made me realise I couldn't just randomise the y value; there had to be some sanity in the peaks and troughs.
I have Googled around a bit, but either I can't find something simple enough for me or I am using the wrong keywords.
Can you please show me what sort of algorithm I would use to generate something like the example, keeping in mind that I am completely new to games programming (well, since making Breakout in 2003 with Visual Basic anyway)?
Answer
The midpoint displacement algorithm is exactly what you want.
That link can generate something like this:
Or like your image, depending on what parameters you use. There's C source available here.
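A minimal 1D sketch of the idea in JavaScript, with assumed parameter names (iterations controls the level of detail, roughness the initial displacement range): repeatedly insert a midpoint between each pair of neighbours, nudge it by a random offset, and halve the offset range each pass so large features stay smooth.

```javascript
// Minimal 1D midpoint displacement (parameter names are assumptions).
// Returns 2^iterations + 1 height values, starting from a flat line.
function midpointDisplacement(iterations, roughness) {
  let heights = [0.5, 0.5];  // left and right endpoints of the terrain
  let spread = roughness;    // max random offset applied to each midpoint
  for (let i = 0; i < iterations; i++) {
    const refined = [];
    for (let j = 0; j < heights.length - 1; j++) {
      const mid = (heights[j] + heights[j + 1]) / 2
                + (Math.random() * 2 - 1) * spread;  // displace the midpoint
      refined.push(heights[j], mid);
    }
    refined.push(heights[heights.length - 1]);
    heights = refined;
    spread /= 2;  // halving each pass keeps large-scale features smooth
  }
  return heights;
}
```

To draw it on a canvas, scale each index to an x pixel coordinate and each height to a y coordinate, then fill below the polyline.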
Do both imply the same meaning?
I have a feeling that in time needs a for or a to, but I'm not sure. For example: "I'll be there in time for the show to start" or "I'll be there in time to see it." (On time won't work in those.)
Answer
On time = punctual.
In time = timely with respect to something that will happen or a situation that may arise.
The meeting starts at 10AM. Please be on time. Don't be late.
The train left at noon. I arrived at 11:45, but not in time to get a seat. The seats were all taken, and I had to stand in the aisle.
If we place the order now, the office furniture should arrive in time for the new employees.
I want an object to rotate after a delay. I tried using StartCoroutine, yield, and WaitForSeconds, but failed. How can I do it?
Answer
You could create a variable, for example called timer, of type float, and set it to the number of seconds you want to wait. Then, in the Update() function, subtract Time.deltaTime from your timer variable each frame. Afterwards, use an if statement to check whether timer has dropped to zero or below (with floating-point subtraction it will rarely equal exactly zero); if so, rotate the object.
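The countdown pattern described above, sketched framework-agnostically in JavaScript for illustration (in Unity the same logic would live in Update() using Time.deltaTime; the action callback here is a hypothetical stand-in for your rotation code):

```javascript
// Countdown timer pattern: subtract the frame delta each update and fire
// the action once the timer reaches (or passes) zero.
function makeDelayedAction(delaySeconds, action) {
  let timer = delaySeconds;
  let fired = false;
  return function update(deltaTime) {
    if (fired) return;        // run the action only once
    timer -= deltaTime;
    // Compare with <= 0, not === 0: floating-point subtraction
    // rarely lands on exactly zero.
    if (timer <= 0) {
      fired = true;
      action();
    }
  };
}
```

Each call to the returned update function plays the role of one frame; after the accumulated deltas exceed the delay, the action runs once.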
Can you tell me which form of the following sentences is the correct one please? Imagine two friends discussing the gym... I was in a good s...