Wolfenstein 3D vs Doom
11/7/2022

You could of course reduce the number of pixels on-screen by a third or more by shrinking the view window, but the benefits from shrinking it are far from linear, as a significant part of the renderer's CPU time is actually used in the BSP calcs, and those do not get any faster with lowering the resolution (at least not significantly). On the bright side, they don't get much slower by increasing it, either.

I will note that there are a large number of differences in how the Wolf3D engine and the Doom engine work internally, beyond rendering techniques. Increasing Doom to 70Hz wouldn't have significantly increased line-of-sight checks, as the AI routines aren't called every tic. Besides going through the list of actors to decrement the frame duration, the only thing it would do every tic is player movement. However, the time spent processing the actor list is probably better spent on the renderer if typical computers weren't going to get 35fps anyway. (Seriously, people often forget that most computers didn't obtain max frame rate on these games when they came out. I've never even seen a 486 do so, but that's probably because we never had a gaming graphics card.)

On the contrary, in Wolfenstein 3D the AI routines are called almost every tic (except for in those brief pauses each "step"). This has interesting side effects when paired with the adaptive tics system, in that even the speed and aggressiveness of enemies is indeterminate, although it's hard to notice. (The easiest way to notice this is the fake Hitler fireballs, which have a typo that makes them only work correctly when adaptive tics are used, but technically the speed variation happens on a much smaller scale for other actors. Due to the slight pauses being intentional, I can't even tell you if they will move faster or slower than they should; it depends on which frame you spend more tics on.)

I think it is an overdrive, but it's not the "co-processor" variant.
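The difference between the two update models can be sketched in C. This is a minimal illustration, not the actual id Software code: the type and function names (`actor_t`, `run_doom_thinkers`, `run_wolf_actors`) are invented for the example, and the real engines do considerably more per actor.

```c
#include <stddef.h>

/* Illustrative sketch only: in the Doom-style loop, each tic merely
   decrements a per-actor counter, and the (expensive) AI routine with
   its line-of-sight checks runs only when the current state's counter
   expires. In the Wolf3D-style loop, the AI routine runs nearly every
   tic, so raising the tic rate raises AI cost almost linearly. */

typedef struct actor {
    int tics;                       /* tics left in the current state */
    void (*action)(struct actor *); /* AI routine to run next         */
    struct actor *next;
} actor_t;

/* Doom-style: cheap countdown every tic, AI only on state change. */
void run_doom_thinkers(actor_t *list)
{
    for (actor_t *a = list; a != NULL; a = a->next) {
        if (--a->tics <= 0 && a->action)
            a->action(a);           /* advance state, think, move */
    }
}

/* Wolf3D-style: the AI/movement routine runs every tic. */
void run_wolf_actors(actor_t *list)
{
    for (actor_t *a = list; a != NULL; a = a->next) {
        if (a->action)
            a->action(a);
    }
}
```

This is why doubling Doom's tic rate would mostly double the cheap countdown work, not the line-of-sight checks.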
For one, the percentage of CPU time spent on parts other than actually drawing pixels to the screen is not negligible; it may amount to up to 50% of total CPU time or even more, especially in maps with a lot of actors. "A lot" may mean 5000-10000 today on NUTS.WAD or something, but on a 486 it might have meant as little as 200 active monsters on the same map, without even necessarily seeing them, as early "nuts-like" mods like DMINATOR.WAD prove. You were lucky to get 1 fps in some spots.

But even assuming that on a given system non-rendering CPU time was on average 30%, and that you could barely get a constant 35 fps most of the time, that left you with 70% of the CPU time being rendering. In order to get double the framerate, you really needed to reduce the overall CPU time to 50% of the current one. Since non-rendering calcs would remain constant (assuming you didn't optimize them at all), they would now be 60% of the halved CPU time, leaving you 40% for rendering. So, for simplicity's sake, if a tic was 1 second, what you could render in 0.7 seconds before, you now have to render in just 0.2 seconds, since non-rendering time will still be 0.3 seconds in either case. That's over a tripling in rendering speed required in order to achieve double the framerate.