• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: June 10th, 2023


  • Yes, ECS is probably the most popular scalable DOD programming pattern, aside from compute shaders. Used correctly, an ECS stores your data in a way that makes access more cache friendly. There are multiple flavours of ECS: some are better for small components and fast access, others are tuned for insertion and deletion.

    One thing I would say if you want to switch to ECS: start with a simple performance test of, say, 100, 10 000 and 1 million entities being updated in a loop, with and without ECS. That way you can track performance and have actual numbers instead of trusting the magic of ECS. ECS has some overhead, isn’t always the best choice, and won’t deliver the expected gains if used incorrectly.
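
    Something along these lines, as a minimal sketch (this isn’t a real ECS library, just a plain array-of-structs update next to an ECS-style struct-of-arrays update; the entity count and component layout are made up for illustration):

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    constexpr std::size_t NUM_ENTITIES = 1'000'000;   // also try 100 and 10'000

    struct EntityAoS { float x, y, vx, vy; };          // "object per entity" layout

    struct EntitiesSoA {                               // ECS-style component arrays
        std::vector<float> x, y, vx, vy;
    };

    template <typename F>
    double time_ms(F&& f) {
        auto t0 = std::chrono::steady_clock::now();
        f();
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(t1 - t0).count();
    }

    int main() {
        const float dt = 1.0f / 60.0f;
        std::vector<EntityAoS> aos(NUM_ENTITIES, EntityAoS{0.f, 0.f, 1.f, 1.f});
        EntitiesSoA soa{std::vector<float>(NUM_ENTITIES, 0.f), std::vector<float>(NUM_ENTITIES, 0.f),
                        std::vector<float>(NUM_ENTITIES, 1.f), std::vector<float>(NUM_ENTITIES, 1.f)};

        const double t_aos = time_ms([&] {
            for (auto& e : aos) { e.x += e.vx * dt; e.y += e.vy * dt; }
        });
        const double t_soa = time_ms([&] {
            for (std::size_t i = 0; i < NUM_ENTITIES; ++i) {
                soa.x[i] += soa.vx[i] * dt;
                soa.y[i] += soa.vy[i] * dt;
            }
        });
        std::printf("AoS: %.3f ms  SoA: %.3f ms\n", t_aos, t_soa);
    }

    Run it at each entity count with optimizations on; the interesting number is the ratio, not the absolute times.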

    I haven’t tried Bevy yet but it looks very promising!


  • Not a silly question at all!

    Compilers are already really smart and do a lot of heavy lifting, but they’re restricted to what you write and they err on the side of safety. They will do things like inlining member functions when they aren’t virtual and are simple enough, which reduces the number of indirections. They won’t re-order your classes or re-write your code. In my experience compilers don’t do a good job of magically vectorizing code (using SIMD registers to their fullest extent), so maybe that’s something a super smart compiler could improve.
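
    A hedged example of why that often fails: in the function below (made up for illustration) the compiler can’t prove that the two pointers never overlap, so it either stays scalar or guards the SIMD path with runtime overlap checks.

    // out and in may alias as far as the compiler knows,
    // so auto-vectorization is skipped or fenced off with checks.
    void scale(float* out, const float* in, float k, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = in[i] * k;
    }

    GCC’s -fopt-info-vec and Clang’s -Rpass=loop-vectorize reports are a reasonable way to see which loops actually got vectorized.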

    I would say it’s possible to have a linter let you know if you’re making structs that are cache unfriendly.
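
    A poor man’s version of that lint, as a sketch (the struct and the 16-byte expectation are invented for illustration): compile-time checks that fail the build if padding creeps in or a hot struct outgrows a typical 64-byte cache line.

    #include <cstdint>

    struct HotData {
        float x, y, rotation;   // 12 bytes
        std::uint32_t id;       // 4 bytes -> 16 bytes total, no padding
    };

    static_assert(sizeof(HotData) == 16, "unexpected padding in HotData");
    static_assert(sizeof(HotData) <= 64, "HotData no longer fits in one cache line");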

    There are also runtime tools like Intel’s VTune or perf on Linux. While those tools are very powerful, the learning curve is steep; in my experience you need to know a lot about optimization to understand the results.

    Today’s generative AI can give you broad strokes about refactoring some code to DOD, and I’m sure in a few years it could do the same for whole projects.

    Oftentimes safety comes at the cost of performance: compilers hold back if you don’t give them enough details such as restrict/noalias, packing, alignment, noexcept, assume/unreachable, or memory barriers. Rust manages to be performant and safe because it is a very verbose and restrictive language to write. C++ gives you all the tools, but they tend to be off by default. In my experience game devs stick to C++ despite the lack of safety guardrails because it’s faster to write efficient code, plus a fair amount of “we’re not making medical equipment” sentiment.
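
    For example, a minimal sketch of that kind of hint (__restrict is a common GCC/Clang/MSVC extension rather than standard C++, and integrate is a made-up function):

    #include <cstddef>

    // Promise the optimizer that the two arrays never overlap and that the
    // function never throws; with that, the loop below is trivially SIMD-able.
    void integrate(float* __restrict positions,
                   const float* __restrict velocities,
                   std::size_t n, float dt) noexcept {
        for (std::size_t i = 0; i < n; ++i)
            positions[i] += velocities[i] * dt;
    }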


  • If you want your code to be performant you need to think about how you lay out your data for the CPU to manipulate. This case might work well for one player, but what if you have 100, or 10 000?

    When you call player->move (assuming polymorphism), you’re doing three indirections: get the player data at the address of player, get that player’s virtual function table, then get the address of the move function.
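
    In code, the pattern being described is roughly this (illustrative types, not anything from your snippet):

    struct Entity {
        virtual ~Entity() = default;
        virtual void move(float dt) = 0;
    };

    struct Player : Entity {
        float x = 0.f, y = 0.f;
        void move(float dt) override { x += dt; }
    };

    void update(Entity* player, float dt) {
        // 1) load *player, 2) load its vtable pointer,
        // 3) load the move() slot, then make an indirect call
        player->move(dt);
    }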

    Each of those indirections is a potential cache miss. A cache miss means your CPU is waiting for the memory controller to provide the data. The CPU can hide some of this latency with pipelining and speculative execution, but there are two problems: the memory layout limits how much it can do, and a memory fetch is still orders of magnitude slower than CPU instructions.

    If you think that’s bad, it gets worse. You finally have the address of the function and can move your player. Your CPU does a few floating point operations on 3D or 4D vectors using SIMD instructions. Great! But did you know that those SIMD registers can be 512 bits wide? For a 4D float vector, that’s 25% occupancy, meaning you could be running 4x as fast.
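
    To make the register-width point concrete, here’s a hedged sketch: it assumes AVX-512 hardware and the -mavx512f flag, n being a multiple of 16, and x-coordinates stored contiguously in their own array (the struct-of-arrays layout discussed just below).

    #include <immintrin.h>
    #include <cstddef>

    // One iteration advances 16 players' x-coordinates at once:
    // a full 512-bit register instead of one 2D/4D vector.
    void move_x(float* xs, const float* vxs, std::size_t n, float dt) {
        const __m512 vdt = _mm512_set1_ps(dt);
        for (std::size_t i = 0; i < n; i += 16) {
            __m512 x = _mm512_loadu_ps(xs + i);
            __m512 v = _mm512_loadu_ps(vxs + i);
            _mm512_storeu_ps(xs + i, _mm512_fmadd_ps(v, vdt, x));   // x += v * dt
        }
    }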

    In games, especially for movement, you should be ditching object-oriented design (array of structs) and using data-oriented design (struct of arrays).

    Don’t do

    struct Player { float x; float y; float rotation; vec3 color; Sprite* head; };
    Player players[NUM];
    

    Instead do

    struct Players {
        Vec2 positions[NUM];
        float rotations[NUM];
        vec4 colors[NUM];
        Sprite heads[NUM];
    };
    

    You will have to write your code differently and rethink your abstractions, but your CPU will thank you for it: fewer indirections, operations happen on data sharing the same cache lines, loops become vectorizable by your compiler, and even the instruction cache is used better.
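
    For instance, something like this (dt, the spin rate and the drift are invented, and I’m assuming Vec2 exposes x and y members):

    #include <cmath>

    void update_players(Players& p, float dt) {
        // First pass touches one contiguous array: trivially vectorizable.
        for (int i = 0; i < NUM; ++i)
            p.rotations[i] += 1.0f * dt;

        // Second pass still streams through contiguous memory, one field at a time.
        for (int i = 0; i < NUM; ++i) {
            p.positions[i].x += std::cos(p.rotations[i]) * dt;
            p.positions[i].y += std::sin(p.rotations[i]) * dt;
        }
    }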

    Edit 1: formatting

    Edit 2: just saw you’re doing 2D instead of 3D. That means your occupancy is 12.5%, so the operation could be 8 times as fast, and even faster without the indirections and with better cache locality.




  • The same can be said about transporting and storing hydrogen. You can’t just use existing infrastructure: hydrogen has to be kept under high pressure, and it leaks out of most containers since it’s the lightest element on the periodic table. Not to mention its energy density per volume (compressed) is much lower than that of gas.

    Making hydrogen through electrolysis is possible, and we’ve all seen it in school, but it is pretty inefficient if you compare storing energy in a lithium battery to making hydrogen from fresh water sources. Not to mention that liquid hydrogen, after being generated and compressed, must be transported, which uses huge amounts of energy. And even given that, it’s pointless to talk about green hydrogen when it’s less than 1% of global hydrogen production and even optimistic projections don’t show it growing that much over the next decade. It’s also an old technology, meaning there isn’t much room for improvement in the process, transportation, and storage problems.

    Hydrogen production is dominated by the fossil fuel industry because it is much more cost effective to extract it from coal and natural gas. Something like 6% of these fossil fuels currently goes to hydrogen production.

    I’m sorry the power costs are so high where you live. Hopefully things will improve with newer power infrastructure.