Wikipedia:Reference desk/Archives/Computing/2015 October 28

= October 28 =

Best 3D environment for some geometry displays
Hello,

I am working on a project that involves displaying a large number of small spheres in a 3D environment, such that a user could use the mouse + WASD keys to "fly" around in some intuitive fashion (perhaps up = forward, down = back, left/right = strafe, mouse = camera direction). Here are the things I am looking for that would best aid in these tasks:

Criteria A - The development environment already has a basic structure for moving in the manner described, where all I would need to do is define the locations of the spheres as (x, y, z, radius), plus some manner of coloring them, etc.

Criteria B - Instancing. I've read a very small amount about this, but if I understand correctly, it means that many copies of the same object take up very little space in memory due to the way they are handled.

Criteria C - The ability to use some sort of additional script/language to read a file with info on where the spheres ought to be located etc., and display them in a 3D environment that can be compiled to an exe/jar file (so that an exe/jar can be made representing a specific layout of spheres).

Criteria D - An environment that can render many spheres with the least slowdown on a GPU. This may mean using different approximations that "look pretty close to a sphere", losing a small amount of roundness in order to speed up rendering. Ideally, some sphere function where I can set the quality within some range.

Criteria E - Good documentation, examples, and example code.

I do know a decent amount of Java programming, and I realize that putting together things like a text-file reader that will run some code and compile an exe/jar environment will take time and effort on my part. I am not asking for a near-complete solution, but for the best tools that will get me there. I would prefer that any external language be Java, but if I have to learn C or something else, I will.
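To illustrate what I mean by a quality setting in Criteria D, here is a rough Java sketch (the names are mine, not from any engine) of how the triangle count of a common sphere approximation, a UV sphere built from stacks and slices, might scale with a quality knob:

```java
// Hypothetical sketch: triangle count of a UV-sphere approximation
// as a function of a "quality" setting in [0, 1].
public class SphereQuality {
    // A UV sphere with s stacks and l slices has 2*l*(s-1) triangles:
    // the top and bottom caps are triangle fans of l triangles each,
    // and each of the (s-2) middle bands has l quads = 2*l triangles.
    static int triangleCount(int stacks, int slices) {
        return 2 * slices * (stacks - 1);
    }

    // Map a quality in [0, 1] to a resolution, here 4..32 stacks.
    static int triangleCountForQuality(double quality) {
        int stacks = 4 + (int) Math.round(quality * 28);
        int slices = 2 * stacks;
        return triangleCount(stacks, slices);
    }

    public static void main(String[] args) {
        for (double q : new double[] {0.0, 0.5, 1.0}) {
            System.out.println("quality " + q + " -> "
                + triangleCountForQuality(q) + " triangles per sphere");
        }
    }
}
```

Multiplied by thousands of spheres, the difference between the low and high end of that range is what decides whether the GPU keeps up.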

Possible environments:

- http://sunflow.sourceforge.net/index.php?pg=feat - Sunflow - I see from this website that it is written in Java. If it provides a "camera" controllable by keyboard/mouse, then I should be able to read in a file and make a jar fairly easily, once I learn the details of how to program the environment itself.

- https://www.blender.org/ - Blender - I am told this already satisfies Criteria A and C, with something like Python as the programming language.

- Unreal Engine 4

Please let me know if anything in my requirements is unclear. I just want some good help from people who have experience working with 3D programming. Thanks in advance for all the help and guidance.

216.173.144.188 (talk) 13:20, 28 October 2015 (UTC)


 * "Strafe" confuses me. Do you mean "pan" right and left? As for whether each sphere is an instance or a copy, note that spheres can take up very little memory: just a center coordinate and a radius, and perhaps some texture, reflectivity, specularity, etc. An exception is a voxel model, but I doubt you want that. So, you should decide whether you want instances of one sphere or separate copies based on whether you want any change to affect them all, or want to be able to change each independently. StuRat (talk) 16:33, 28 October 2015 (UTC)
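To make the "very little memory" point concrete, here is a minimal Java sketch (hypothetical names, not tied to any engine): each sphere definition is just a handful of numbers, so storing even huge counts is cheap; it's the rendering, not the storage, that is the real constraint.

```java
// Hypothetical sketch: a sphere definition is only a center, a radius,
// and a color -- roughly 4 doubles + 1 int of fields per sphere.
public class SphereSpec {
    static final class Sphere {
        final double x, y, z, radius;
        final int rgb;  // packed 0xRRGGBB color
        Sphere(double x, double y, double z, double radius, int rgb) {
            this.x = x; this.y = y; this.z = z;
            this.radius = radius; this.rgb = rgb;
        }
    }

    public static void main(String[] args) {
        // 100,000 sphere definitions fit in a few megabytes.
        Sphere[] spheres = new Sphere[100_000];
        for (int i = 0; i < spheres.length; i++) {
            spheres[i] = new Sphere(i % 100, (i / 100) % 100, i / 10_000,
                                    0.5, 0xFF8800);
        }
        System.out.println(spheres.length + " spheres defined");
    }
}
```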

StuRat: Think of strafing as walking left or right while keeping your head pointed in the forward direction (the camera does not rotate, it simply slides left or right). I think "panning" is the same idea; "strafe" is a gaming term.

With regard to whether the spheres are instances, I guess it was just a matter of trying to get low GPU/CPU use out of potentially hundreds of thousands of displayed spheres (I haven't tested this, so I may need to adjust my numbers drastically downward if this is too much for a standard PC). It would seem from your response that this is more something I need to implement with my programming. I do want to do different things with some spheres, though, like eventually giving them different radii and colors. I am more worried about what software would help me most overall than about instances. Could you address this concern, please?

(PS: Looking at voxels, this is sort of along my line of thinking, as I am using the spheres as a kind of "3D pixel". I don't know a lot about voxels, though, and from the wiki article it seems they are cubes. Perhaps spheres will look nicer, but maybe cubes will allow me to have higher quantities of objects on the screen without lag?)

216.173.144.188 (talk) 16:50, 28 October 2015 (UTC)


 * You shouldn't think of voxels as either cubes or spheres, but just as 3D coordinates (points). When you want to display them, your display method may model them in different ways. If you have in mind putting many voxels close together to form solid objects, some shading and display methods will average them out so they don't look lumpy. Another alternative is the use of solid primitives (spheres, cubes, cylinders, etc.). What exactly is the application you have in mind?


 * Also, do you want perspective views, isometric views, etc.? And do you need Z-clipping? (Even if you don't want it for display purposes, it may dramatically improve performance.) StuRat (talk) 17:19, 28 October 2015 (UTC)

StuRat, thank you for your expertise; you are making me think about aspects of this I did not see before! I will answer your question about my intent first, followed by the technical info.

What I want to do is generate 3D fractals using the Chaos Game. The 3D environment is so the user can view aspects of the fractal from different angles instead of just a fixed view projected to 2D. (However, getting a specific fixed 2D image with a certain camera angle out to a PNG/JPG image file would also be nice!)
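For the curious, the chaos game itself is simple to state in code. This is a minimal Java sketch (my own hypothetical names) of the midpoint rule played against the four vertices of a tetrahedron; the generated points would become the sphere centers to display:

```java
import java.util.Random;

// Sketch of the 3D chaos game toward a Sierpinski tetrahedron:
// repeatedly jump halfway from the current point to a randomly
// chosen vertex, recording each position along the way.
public class ChaosGame3D {
    static double[][] run(double[][] vertices, int n, long seed) {
        Random rng = new Random(seed);
        double[] p = {0.25, 0.25, 0.25};      // arbitrary start point
        double[][] points = new double[n][3];
        for (int i = 0; i < n; i++) {
            double[] v = vertices[rng.nextInt(vertices.length)];
            for (int k = 0; k < 3; k++)
                p[k] = (p[k] + v[k]) / 2.0;   // move halfway to the vertex
            points[i] = p.clone();
        }
        return points;
    }

    public static void main(String[] args) {
        // Vertices of a regular tetrahedron with unit edge length.
        double[][] tetra = {
            {0, 0, 0}, {1, 0, 0}, {0.5, Math.sqrt(3) / 2, 0},
            {0.5, Math.sqrt(3) / 6, Math.sqrt(2.0 / 3.0)}
        };
        double[][] pts = run(tetra, 50_000, 42L);
        System.out.println("generated " + pts.length + " sphere centers");
    }
}
```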

With this in mind... clipping may be useful if objects are hidden by others, out of camera view, or so far away that they would be rendered at less than one pixel in size. Perhaps we also need to clip very close objects that would look like a massive blob and obstruct the view of everything else! I would think perspective drawing is more real-world than isometric, but I'm not sure. Wiki does say perspective is used in video games. Geometric primitives sound like a good way to go. I think we are somewhat on the same page, as we are both talking about optimizing display speed. Let me know your reaction to my application. Thanks again for the help!

216.173.144.188 (talk) 17:35, 28 October 2015 (UTC)


 * So maybe create a Sierpinski Pyramid instead of a Sierpinski Triangle? I can see that working well with a voxel model. Updating hundreds of thousands of elements in real time does seem problematic, though, so we will need to radically simplify the display process. Here's how I would approach it:


 * 1) Display the object initially aligned with the screen X, Y, and Z axes, such that each voxel's X and Y coords correspond with one screen pixel's X and Y.


 * 2) The display method will start farthest behind the screen (most negative Z coord) and then render the object one plane at a time, from back to front. (We've ensured that the spacing of voxels is such that they produce pixels with no gaps in between, so that the solid portions really do look solid. The planes in front will hence obscure the planes behind.)


 * 3) Use a slowly changing color as you move from back to front, to show depth. Should be darkest in back and brightest in front.


 * 4) Panning right/left & up/down, and Z-clipping can then be added. You could use arrow keys to control panning and some other keys to control the front and rear Z-clipping planes.


 * 5) Perspective views with changing camera angles, and zooming in and out, could possibly be added later, but beware that those will introduce new problems:


 * 5a) Jaggies. A nice straight line can look like crap when rotated. (Lines that are actually at an angle will have jaggies too, even if not rotated. So a "Sierpinski Cube" might look better than a Pyramid. When you do have angles, 45 degrees looks better than, say, 5 degrees.)


 * 5b) Gaps between pixels. If you zoom in or rotate in certain ways, you will see whatever is behind the front "wall".  You can somewhat compensate for this by adding more voxels, or displaying each voxel as more than a single pixel, but those slow things down and/or make the object look lumpier.


 * 5c) Once you rotate the object, the method of drawing the object from the back of the screen to the front will no longer be as simple as going from the most negative Z value to the most positive Z value, as screen Z and model Z will no longer correspond. It will be much more complex.


 * 6) To save on processing time, I would not implement ray tracing for shadows, reflections, light sources, etc. StuRat (talk) 18:19, 28 October 2015 (UTC)
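Steps 1 through 3 above can be sketched as a tiny software renderer in Java (this is just one reading of the steps, with hypothetical names; a real implementation would differ): draw the voxel planes back to front into a pixel grid so that nearer planes overwrite farther ones, shading brighter toward the front.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of back-to-front voxel rendering with depth shading.
public class BackToFront {
    // voxels: each {x, y, z}, with x and y already being screen pixels.
    static int[][] render(int[][] voxels, int w, int h) {
        int[][] image = new int[h][w];          // 0 = background
        int zMin = Integer.MAX_VALUE, zMax = Integer.MIN_VALUE;
        for (int[] v : voxels) {
            zMin = Math.min(zMin, v[2]);
            zMax = Math.max(zMax, v[2]);
        }
        int[][] sorted = voxels.clone();
        Arrays.sort(sorted, Comparator.comparingInt(v -> v[2]));  // back first
        for (int[] v : sorted) {
            if (v[0] < 0 || v[0] >= w || v[1] < 0 || v[1] >= h) continue;
            // Brightness 64 (farthest) .. 255 (nearest), per step 3.
            int shade = (zMax == zMin) ? 255
                : 64 + (int) Math.round(191.0 * (v[2] - zMin) / (zMax - zMin));
            image[v[1]][v[0]] = shade;          // front overwrites back
        }
        return image;
    }

    public static void main(String[] args) {
        int[][] voxels = {{1, 1, -5}, {1, 1, 3}, {2, 1, -5}};
        int[][] img = render(voxels, 4, 4);
        System.out.println("pixel (1,1) shade = " + img[1][1]);  // nearer voxel wins
    }
}
```

Because the planes are drawn in Z order, no hidden-surface test is needed; that is exactly the simplification being described.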

StuRat:

Regarding a Sierpinski Pyramid... yes, I could also render Platonic solids, etc. If "hundreds of thousands" of spheres are way too many, I can go lower resolution and make perhaps ~2,000 spheres instead.

I am somewhat perplexed by your response. I will answer using the numbers you have given.

1, 2 - I'm not sure about this approach. These are 3D objects. The only benefit I see of this is rendering closer objects first, but it seems like you want to restrict the angles the camera can have, or make different angles more difficult. This does not suit my intent. (See my response to 4, 5c.)

3 - This makes sense as a way to indicate depth; I might employ this method, having non-illuminated spheres farther back.

6 - I agree; I am not trying to get full realism, so I don't need silly things like shadows of the spheres on a "floor" or "wall", nor do I need reflections. Having the spheres illuminated and colored is sufficient.

5a, 5b - Actually, I INTEND for the fractal to be broken up. I do not wish to make it "solid", but to show individual spheres so the user can observe their density and distribution in different areas.

4, 5c - I'm not sure why we are talking like this. It seems you have the idea that I am programming this ENTIRE environment from scratch. I don't know how to do that, and in my original post I hope it was clear that I want to use some premade development tools (game engines, etc.) to do some of this. Do you have suggestions for what tools I could use? This is what I'm looking for. Your advice regarding clipping etc. is very helpful, but if we are looking at this as if I am writing a complete environment from scratch with no tools, it is the wrong approach.

I hope my responses have made things clear. Thanks again.

216.173.144.188 (talk) 18:55, 28 October 2015 (UTC)


 * I was trying to come up with a system that would allow you to render hundreds of thousands of elements in real time, which requires drastic simplification. Yes, you can use off-the-shelf rendering applications, but then you will instead need to radically reduce the number of elements to be rendered, or accept long lags after every operation. I'm not the best person to recommend specific software, so I will let others do that. (I restated the question below, in case potential responders assume I already answered it in this long thread.) StuRat (talk) 19:04, 28 October 2015 (UTC)

Thanks StuRat. So if I understand you correctly, we have two ways to do this: your way, which allows a massive quantity of objects at the cost of having to program almost all of it; or the pre-designed-tools approach, which you estimate will require me to use far fewer objects in order to avoid needing drastic simplification. Your insight was really helpful, thanks again!

216.173.144.188 (talk) 20:51, 28 October 2015 (UTC)


 * You're welcome. There's also the programming time to consider.  I could make a GIF animation of the concepts I outlined above (leaving out #5) in about a week.  If you go to my home page, you can see other GIF animations I've created (at the bottom, click SHOW to see each). StuRat (talk) 21:17, 28 October 2015 (UTC)


 * There are tricks for rendering very large numbers of very simple objects that leverage the power of the GPU - specifically, if you can use the GPU to update the positions of the objects, then you can move all of that work out of the slow old CPU and into the massively parallel world of the GPU. Sadly, this is fairly advanced stuff, and probably not appropriate for a relative newbie to 3D graphics. You might also want to consider whether you can treat this set of objects as a particle system, for which there are many well-understood optimisations that you could probably get from a Unity plugin of some kind. SteveBaker (talk) 23:23, 30 October 2015 (UTC)

Specific 3D rendering software recommendation ?
Per the above, the OP wants software that can render (ideally hundreds of thousands, but perhaps as few as 2,000) elements (spheres or cubes) in real time and allow for changing camera angles, panning, zoom, Z-clipping, etc. No ray tracing needed. See their original post for additional requirements. Any ideas? StuRat (talk) 19:04, 28 October 2015 (UTC)


 * We have List_of_game_engines. I think many of them can do what the OP wants, but many of them are not free. Unity_(game engine) is free for personal use. Blender_Game_Engine is free and open source, but according to their website it is still rapidly evolving and will almost certainly break self-compatibility sometime in the future. Of course these tools will save the OP from having to make a complete engine, but my understanding is that even getting to the point where you can walk around a few spheres using Blender is a fairly serious time investment. SemanticMantis (talk) 20:10, 28 October 2015 (UTC)

Yes, sadly it seems that going from 2D pixels on a BufferedImage in Java to a 3D environment is a MASSIVE step. I hope I can cope with it. I would have thought that just suspending some spheres in the air wouldn't be this hard, and was comparing its simplicity to modern video games. It would seem it's all more complex than I thought. Would there be any value in trying to find a professor at a local university who does this type of work in computer science? Is there any group of people interested in assisting with projects like this for the value of exploring mathematics? More thoughts on engines and tools are welcome!

216.173.144.188 (talk) 20:51, 28 October 2015 (UTC)


 * Step #1 is undoubtedly to get away from Java. Java really doesn't have a great future in interactive 3D.  For a relative beginner, I'd definitely recommend using Unity-3D.  It's free for personal use - and (with a watermark) for sharing with other people - and if your game ever gets popular enough that you want to start making money with it, the Unity license is really cheap and flexible.
 * If you don't want to rely on someone else's game engine, then start learning either:
 * C++ (or C#) and DirectX (windows)
 * C++ and OpenGL (linux, mac or windows)
 * JavaScript and WebGL (anything except Apple systems in a web browser)
 * Objective-C and OpenGLES for Apple tablets & phones.
 * Java and OpenGLES is a kind of option under Android - although most games aren't using Java (because it sucks), so back to C++...but I strongly recommend you don't!
 * Whichever of those routes you choose, expect to have a 6 to 12 month learning curve before you have a playable game.


 * Hence Unity.
 * SteveBaker (talk) 21:07, 28 October 2015 (UTC)

Thanks Steve, any thoughts on Blender vs Unity? Also, any rough estimate for time it may take to get a running program with Unity engine? I'm starting to feel a little discouraged, to be honest.

216.173.144.188 (talk) 22:05, 28 October 2015 (UTC)


 * I can't help on Blender vs. Unity, but it does seem that Unity is currently much more stable. Some other bits of advice and places you might get further help: As to looking for local professors - probably not helpful. Even the ones focused on 3D graphics will be submitting their work to places like SIGGRAPH, and may not be terribly interested in using off-the-shelf game engines for fractal visualization. You might have some luck soliciting a CS grad student, if you can find one with similar interests. I'd think you'd have better luck seeing if there is a game developer club in the area. The local university may have listings for graphics enthusiasts' groups. You could also hang out at reddit's gamedev area. There is technically an /r/indiegamedev, but it's private and you'll have to ask for membership. There's also /r/infographic, which has some content on data visualization. And there's /r/Unity3D, which will probably be a good place to ask for specific help; they also have a wiki and other tutorials. I have no idea how long it really takes to get something simple going in Unity, but the time invested will likely pay off, as you'll then have a nice skill set that can lead to many other fun projects, or perhaps even employment opportunities. SemanticMantis (talk) 22:10, 28 October 2015 (UTC)


 * Unity is much more mainstream than Blender. The Blender game engine has a checkered history and is not widely used. With Unity, there are a bazillion plugins - and some categories of game can be written without writing a single line of code. If you follow the first half dozen Unity video tutorials, you'll be able to get SOMETHING working over a weekend. But getting the actual game you want to write done will probably take you multiple weeks... assuming you have enough programming skill to write some simple control structures in C#. You don't need to be a C# expert - I never learned the language, but found it close enough to other languages I knew to pick up enough of it to write a handful of plugins.


 * I definitely wouldn't recommend the blender game engine. SteveBaker (talk) 16:22, 29 October 2015 (UTC)

I have now looked at the very first Unity project tutorial, the rolling-ball game, and I am very happy! Some of the things I need for MY project were explained there; I just need to adapt them! (Make the direction keys bind to the camera instead of a ball, find out how to get BOTH mouse and keyboard input, etc.) Are some of the abilities StuRat talked about within close reach? Like if I want to do Z-clipping, or clip things that aren't visible (obstructed, or out of the camera frame)? Thanks a lot to everyone, this is helping a ton!

216.173.144.188 (talk) 23:27, 29 October 2015 (UTC)


 * As far as the software not wasting time trying to render objects which are off the screen or hidden, hopefully the software knows to do that automatically. StuRat (talk) 03:36, 30 October 2015 (UTC)


 * Yes, for sure - Unity handles all of the ugly 3D optimisation stuff completely automatically behind the scenes. If it's more efficient to detect that something doesn't need to be drawn, than to just go ahead and draw it anyway - then Unity does exactly that. Just place and move your objects through the scene and let Unity worry about all of that stuff. SteveBaker (talk) 14:20, 30 October 2015 (UTC)


 * Yes, that's a good point about it being difficult to detect. While objects off the sides, top, or bottom of the screen are simple to identify (a little harder if only partially off screen), objects hidden behind others are not. That basically requires ray tracing to tell which are hidden and which aren't, which is a rather expensive operation. This is why I would just draw from the (screen) back to the front, so the front automatically hides whatever should be hidden. StuRat (talk) 19:15, 30 October 2015 (UTC)
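The complication raised earlier (after rotation, screen Z no longer matches model Z) can be sketched in Java as follows: transform each object's center into camera space first, then sort on the transformed depth rather than the model's own Z. Names here are hypothetical, just an illustration of the idea:

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: back-to-front draw order under a rotating camera.
public class DepthSort {
    // Rotate a point {x, y, z} around the Y axis by angle radians.
    static double[] rotateY(double[] p, double angle) {
        double c = Math.cos(angle), s = Math.sin(angle);
        return new double[] { c * p[0] + s * p[2], p[1], -s * p[0] + c * p[2] };
    }

    // Return draw order (indices into centers), farthest first,
    // assuming the camera sits at the origin looking along +Z.
    static Integer[] drawOrder(double[][] centers, double cameraYaw) {
        Integer[] order = new Integer[centers.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        double[] depth = new double[centers.length];
        for (int i = 0; i < centers.length; i++)
            depth[i] = rotateY(centers[i], -cameraYaw)[2];  // camera-space z
        Arrays.sort(order, Comparator.comparingDouble((Integer i) -> depth[i]).reversed());
        return order;
    }

    public static void main(String[] args) {
        double[][] centers = {{0, 0, 5}, {4, 0, 0}};
        // With no rotation, the z=5 object is farther and drawn first...
        System.out.println(Arrays.toString(drawOrder(centers, 0.0)));
        // ...after a 90-degree yaw, the other object is farther instead.
        System.out.println(Arrays.toString(drawOrder(centers, Math.PI / 2)));
    }
}
```

The sort itself is cheap; the point is that it must happen in camera space every frame once the camera can rotate.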


 * That's kinda old-school. Back when graphical objects were in the 10-to-1,000-triangle range, you couldn't afford to do much work to decide what to discard and what to keep... but with the level of detail that's expected in current games, it's not uncommon to have 100,000-triangle objects - and in that case, you can afford to do quite a lot of work to decide whether it's worth drawing or not.


 * What's commonly done these days is to generate 'proxy' geometry for complex objects (e.g. a bounding cuboid with no textures, lighting, etc.). You draw the proxy objects with writes to the frame buffer turned off, and have the hardware report back how many pixels passed the Z-test. If the result is zero, then it's safe to skip rendering the actual geometry. Obviously you don't do this if you're rendering simple geometry - but if the object you're drawing is a 100,000-triangle monster, then this can be a big win.
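The proxy-geometry test described above can be imitated on the CPU to show the logic. Real engines do this with hardware occlusion queries; this Java sketch, with hypothetical names, is only an illustration (depth convention: smaller z = nearer the camera):

```java
import java.util.Arrays;

// CPU-side sketch of a proxy-geometry occlusion test.
public class OcclusionProxy {
    final double[][] zbuf;   // depth buffer, initialized to "far"

    OcclusionProxy(int w, int h) {
        zbuf = new double[h][w];
        for (double[] row : zbuf) Arrays.fill(row, Double.MAX_VALUE);
    }

    // Rasterize an axis-aligned occluder rectangle at constant depth.
    void drawOccluder(int x0, int y0, int x1, int y1, double z) {
        for (int y = y0; y < y1; y++)
            for (int x = x0; x < x1; x++)
                zbuf[y][x] = Math.min(zbuf[y][x], z);
    }

    // "Draw" the proxy box with frame-buffer writes off: count pixels
    // that would pass the Z-test. Zero means skip the real geometry.
    int pixelsPassing(int x0, int y0, int x1, int y1, double z) {
        int passed = 0;
        for (int y = y0; y < y1; y++)
            for (int x = x0; x < x1; x++)
                if (z < zbuf[y][x]) passed++;
        return passed;
    }

    public static void main(String[] args) {
        OcclusionProxy frame = new OcclusionProxy(8, 8);
        frame.drawOccluder(0, 0, 8, 8, 1.0);   // a wall at depth 1
        // A proxy box behind the wall: no pixels pass, skip the object.
        System.out.println(frame.pixelsPassing(2, 2, 6, 6, 5.0));
        // A proxy box in front of the wall: pixels pass, must draw it.
        System.out.println(frame.pixelsPassing(2, 2, 6, 6, 0.5));
    }
}
```

Note the conservative property mentioned below: the test can only ever err toward drawing too much, never toward skipping something visible.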


 * We can actually get cleverer even than that - rendering the scene at low resolution with every object drawn in a different color at very low detail - then only rendering the objects whose colors appear in the image. We can even extrude the proxy geometry along the direction of motion to get an estimate of whether the object needs to be rendered NEXT frame.


 * There is a whole field of work relating to Potentially visible sets (PVSs), which provide a suite of tricks you can use.


 * You need a similar set of solutions for rendering shadow-casting objects from the point of view of the light-source...complicated by the issue of objects that are outside the field of view that cast shadows into the field of view.


 * There are a lot of MUCH fancier techniques for culling than simple field-of-view (FOV) and range/level-of-detail (LOD) culling.


 * That said, I don't know how many of those Unity uses natively - and which require plugins or programming that you'd need to do yourself...but as our OP describes this application, I'm pretty sure that FOV and LOD are sufficient. SteveBaker (talk) 23:19, 30 October 2015 (UTC)


 * Interesting, but it sounds like some of those shortcuts would only provide approximations of what is visible and what is hidden. StuRat (talk) 03:15, 1 November 2015 (UTC)


 * That is true of some of those techniques - but not others. The simplest option (draw bounding-box proxy geometry) is guaranteed to always draw everything that needs to be drawn - although it might quite often draw something that didn't need to be drawn.  Overall, it's likely to be a net win - although the effectiveness depends heavily on your ability to do rough near-to-far range sorting and/or to have domain-specific knowledge of the scene content.


 * So, for example, I had to draw a city scene with a few dozen very large buildings that didn't contain many triangles - and thousands of animated humans and vehicles moving around in the streets between them. The humans and vehicles covered relatively few pixels, but demanded insanely complex geometry, lots of textures, and really complex shaders. In that case I could draw all of the buildings mindlessly, then render a plain-colored bounding cuboid for each human and each vehicle. If no part of the bounding box passed the Z-test, then I could skip rendering the detailed geometry - which was a huge win and was not an approximation.


 * In another situation, we had very fillrate-heavy particle systems being drawn behind large/complex foreground geometry - and we also had a very slow-moving camera. I rendered an over-large bounding volume for the particle systems and did the occlusion rendering at lower resolution AND lower frame rate than the regular graphics. That resulted in monumental fill rate improvements - but the results were (admittedly) only approximate.  So once in a while, the very edge of a particle system that SHOULD have been visible was discarded - but because these particles had very soft, fuzzy/translucent edges, nobody ever noticed a problem.


 * Sometimes, absolute rendering perfection is trumped by getting a higher frame rate or more scene complexity...and these tricks are used in many video games, visual simulation and augmented reality software. SteveBaker (talk) 05:17, 2 November 2015 (UTC)

How do I get a Blue Screen to stay on screen?
I know there is a setting somewhere (Windows 7) that will keep the Blue Screen on my screen so I can note the technical details, instead of it disappearing and the PC rebooting itself. Can someone remind me where I can do that, please?

Thanks Gurumaister (talk) 16:15, 28 October 2015 (UTC)
 * My Computer > Properties > Advanced System Settings > Advanced > Startup and Recovery, and uncheck "Automatically restart" under "System failure". Tevildo (talk) 18:43, 28 October 2015 (UTC)

Thank you, Tevildo. Found it. 82.71.20.194 (talk) 19:50, 28 October 2015 (UTC)