
Realtime Extreme 3D Graphics

13 posts, 1093 views



A long time ago I posted a thread about extremely detailed graphics that need only low resources. Turns out they were right, and here is the evidence:

Go to EUCLIDEON video no. 1, released in 2014

Go to EUCLIDEON video no. 2, released in 2014

Go to EUCLIDEON video, released in 2011

So what do you think about it? :)




Euclideon main site
+0 / -0
It's not "low resources"; it's just a method to directly scan the real world instead of imitating it. Then you have this model and you put it into the game. However, it's not really "low resources" either. Oh boy, this room alone is a lot of polygons... and everybody knows that normals are the cheaper version of small polygons.
It's faster, but is it more efficient considering the hardware? Absolutely not. (This can be seen in the second video, where the framerate is just horrendous!)
That being said, while I don't think it would benefit the game industry, it could still be very useful as educational software. Kids in school could look at historical buildings which they would probably never see in their lives (yes, even considering that tourism is ever increasing).
+0 / -0
From what I remember, in the video they explain it's more efficient. Look at the last video.

They repeat this idea in this video too: "a normal laptop"
Go to Youtube video
+0 / -0
9 years ago
If I want to sell you a product, I won't tell you about the faulty things in said product; instead, I will praise it as much as I can.
+1 / -0
9 years ago
Watch the video, there is no reason for them to lie... I think.
+0 / -0
9 years ago
There is no reason to lie? They sell the technology that is behind these videos. "Please contact us." "If you want a scan of a historical object, we will do it for free."
I watched all 3 videos, and what I saw was definitely a photographic representation of the real world, but then... it didn't look all that realistic. Probably because of the lighting. Anyhow, what I also noticed was a really bad framerate, as you can quite easily distinguish individual frames in the last video. Considering that, I don't think it is "low resources" at all.
+1 / -0
Skasi
quote:
here is the evidence

Here we go again. ROrankForever, that's not evidence. I can put a video on YouTube showing stutter-free ZK on highest settings with 1500 units moving simultaneously, displaying "9001 FPS" in the Esc menu. So what?
+3 / -0
But maybe Euclideon did have some interesting technique they didn't want to share.

I say this because they seem to have products to show for it, like the virtual cathedral or the street view.

Originally they didn't have any product, just the demo island; then they had the Geospatial viewer, then the virtual cathedral.

But it's all kind of crude looking; if they really had a special method, they should share it with a giant company like Google. Imagine a faster Google Earth, maybe with a flyby mode that integrates Street View seamlessly.
+0 / -0
With even some fairly basic knowledge of computing, one can suspect that they are full of it. Quite a few experts also say they doubt it. I see little reason to believe this snake oil. Also, IIRC all of their videos were static geometry only (though I doubt they can even pull that off).

EDIT: also, comments disabled, that is always encouraging :)
+1 / -0
What a shitty buildup lol. You could immediately tell that those "real video clips" were absolutely not real. They could've at least not made the camera move in a perfect line, but the lighting still looked flat and unnatural. Edges look wrong, and there are still little things it can't reproduce (like how almost every structured surface has a marginal amount of glistening when you move). In particular, you can't reproduce marble, wet objects or metal with this approach. It will always look too, well, static during any movement.

Their main selling argument is how they can produce such detail with limited computer memory. It's not hard to see how, if they did manage to find a proper tree-based data structure that allows finding the point for a screen pixel in log(N), their claims of "unlimited" stuff would hold (as in, storage would be the limit). I'm still skeptical about disk access times though; pulling 1920x1080 pages from disk every frame is not going to work smoothly unless you can somehow cache stuff. Read: small geometries (that largely fit into the cache, like the cathedral) or repeating geometries (like what we saw years ago).
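
To make the log(N) point concrete, here is a minimal sketch of the kind of structure that claim assumes: a sparse octree over a point cloud, where resolving one query costs a number of node visits proportional to tree depth rather than the total point count. This is my own toy illustration in Python, not Euclideon's actual format; every name and parameter in it is made up.

# Minimal sketch (assumed structure, not Euclideon's code): a sparse octree so
# that resolving one query point costs O(tree depth) ~ O(log N) node visits
# instead of touching all N stored points.
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

Point = Tuple[float, float, float]   # (x, y, z)
Color = Tuple[int, int, int]         # (r, g, b)

@dataclass
class OctreeNode:
    center: Point
    half: float                                   # half the edge length of this cube
    point: Optional[Tuple[Point, Color]] = None   # representative sample for this cell
    children: List[Optional["OctreeNode"]] = field(default_factory=lambda: [None] * 8)

def child_index(center: Point, p: Point) -> int:
    """Which of the 8 octants around `center` contains p."""
    return (p[0] >= center[0]) | ((p[1] >= center[1]) << 1) | ((p[2] >= center[2]) << 2)

def child_center(center: Point, half: float, idx: int) -> Point:
    q = half / 2
    return (center[0] + (q if idx & 1 else -q),
            center[1] + (q if idx & 2 else -q),
            center[2] + (q if idx & 4 else -q))

def insert(node: OctreeNode, p: Point, c: Color, min_half: float = 1e-3) -> None:
    """Push a sample down the tree, keeping one representative per visited cell."""
    if node.point is None:
        node.point = (p, c)          # coarse levels keep a representative (LOD)
    if node.half <= min_half:
        return                       # finest level reached
    idx = child_index(node.center, p)
    if node.children[idx] is None:
        node.children[idx] = OctreeNode(child_center(node.center, node.half, idx),
                                        node.half / 2)
    insert(node.children[idx], p, c, min_half)

def lookup(node: OctreeNode, p: Point) -> Optional[Tuple[Point, Color]]:
    """Descend toward p; O(depth) node visits, i.e. logarithmic in resolution."""
    idx = child_index(node.center, p)
    child = node.children[idx]
    if child is None:
        return node.point            # finest sample stored along this path
    return lookup(child, p)

# Usage: build a tiny cloud and resolve one "pixel" query.
root = OctreeNode(center=(0.0, 0.0, 0.0), half=1.0)
insert(root, (0.25, -0.5, 0.1), (200, 180, 160))
insert(root, (-0.7, 0.3, 0.4), (90, 120, 90))
print(lookup(root, (0.26, -0.49, 0.11)))

Note that this says nothing about the disk-streaming problem above; the depth-proportional lookup only helps once the needed pages are already in memory.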

Quote from their website (actually the first thing you'll see going to their home page): "ACCESS ALL OF YOUR DATA IN LESS THAN A SECOND".
If "within at most a second" is what they want to use for real-time applications... Well you can see where this is going. And yes, the link refers you to using massive point cloud data on a normal PC, not some cloud storage solution.

Lol, another hilarious quote from their website: "By creating an ultra-efficient search algorithm – similar to the way that Google can instantly search the vast amounts of data on the internet – Bruce was able to fundamentally change the way that 3D data was handled by computers"
...what? Google can instantly search the entire internet? Right...
If he wants to compare his solution to the massive amount of infrastructure Google's datacenters are using... sure.
+1 / -0

9 years ago
quote:
what? Google can instantly search the entire internet?

Didn't you know? They also have a backup on a floppy disk.
+0 / -0
9 years ago
Nonsense, PLrankAdminSprung.
Everybody knows that Chuck Norris has internet on his floppy.
+0 / -0
9 years ago
OK, sorry for some thread necromancy here, but I have an irresistible urge to comment on this:

The initial stuff that the Euclideon guys showed was just plain embarrassing. Their tech is stuff that was researched heavily around the turn of the century. There is nothing magic to it. It's called point-based rendering (not to be confused with voxel rendering; voxels are not points!). There are some seemingly nice properties to point-based methods: you can put them in a hierarchy and compress the hell out of that. Google "QSplat" to see how you could render millions of points of a high-resolution laser scan in real time with that, basically on the CPU, in 2002 or so.
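
For flavour, here is a rough toy version of the QSplat-style traversal I mean (my own sketch, not the actual QSplat code, and all names here are invented): a hierarchy of bounding spheres, each carrying a pre-averaged color, refined only until a node's projected size drops below roughly one pixel, so distant geometry costs far fewer node visits than it has points.

# Toy QSplat-style traversal (assumed/simplified, not the original implementation).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SphereNode:
    center: Tuple[float, float, float]   # world-space position
    radius: float                        # bounds all points below this node
    color: Tuple[int, int, int]          # pre-averaged color of those points
    children: List["SphereNode"] = field(default_factory=list)

def projected_radius(node: SphereNode, eye_z: float, focal_px: float) -> float:
    """Approximate on-screen radius in pixels for a viewer looking down +z."""
    depth = max(node.center[2] - eye_z, 1e-6)
    return focal_px * node.radius / depth

def render(node: SphereNode, eye_z: float, focal_px: float,
           splats: List[Tuple[Tuple[float, float, float], float, Tuple[int, int, int]]],
           pixel_cutoff: float = 1.0) -> None:
    """Collect splats: stop refining once a node covers about one pixel or less."""
    size_px = projected_radius(node, eye_z, focal_px)
    if not node.children or size_px <= pixel_cutoff:
        splats.append((node.center, size_px, node.color))   # draw as one disc
        return
    for child in node.children:
        render(child, eye_z, focal_px, splats, pixel_cutoff)

# Usage: a two-level toy hierarchy; far away it renders as a single splat.
leaves = [SphereNode((x, 0.0, 10.0), 0.05, (180, 170, 160)) for x in (-0.1, 0.0, 0.1)]
root = SphereNode((0.0, 0.0, 10.0), 0.2, (180, 170, 160), children=leaves)

splats: list = []
render(root, eye_z=0.0, focal_px=50.0, splats=splats)    # far away: 1 splat
print(len(splats))
splats.clear()
render(root, eye_z=9.0, focal_px=500.0, splats=splats)   # close up: 3 splats
print(len(splats))

The compression in the real thing comes from quantizing each child's position, radius, normal and color relative to its parent, which is exactly what makes the animation problem below so awkward.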

However nice that may sound, it unfortunately has drawbacks. Those which I remember from the top of my head are:

- antialiasing without stupid supersampling or multisampling required a lot of hacks and still didn't work quite right; however, I'm not aware of anyone retrying with modern postprocessing AA approaches - these might help
- you can't efficiently animate the compressed point clouds

But if they manage to do proper and efficient streaming for rendering static, colored point sets, I'll be impressed. They would have a solid market there with people who need to work on high-resolution laser scans and other nasty things. And that's what they seem to be aiming for now. This is much more reasonable.

I just don't expect this stuff to appear in games.
+5 / -0