  • ccache folder size started becoming huge. And it just didn’t speed up the project builds, I don’t remember the details of why.

    That’s highly unusual, and it suggests a misconfiguration: instead of caching your builds, ccache just accumulated compiled objects it could never reuse.

    When I tried it I was working on a 100+ devs C++ project, 3/4M LOC, about as big as they come.

    That’s not necessarily a problem. I worked on C++ projects of a similar size and ccache just worked. It has more to do with how your project is set up, and with misconfigurations.

    Compilation of everything from scratch was an hour at the end.

    That fits my use case as well. End-to-end builds took slightly longer than 1h, but after onboarding ccache the same end-to-end builds would take less than 2 minutes. Incremental builds were virtually instant.

  • Switching to lld was a huge win, as well as going from 12 to 24 compilation threads.

    That’s perfectly fine. Ccache acts before linking, and naturally being able to run more parallel tasks can indeed help, regardless of ccache being in place.

    Surprisingly, ccache works even better in this scenario. With ccache, the bottleneck of any build task switches from the CPU/Memory to IO. This had the nice trait that it was now possible to overcommit the number of jobs as the processor was no longer being maxed out. In my case it was possible to run around 40% more build jobs than physical threads to get a CPU utilization rate above 80%.
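    To make that concrete, here’s a minimal sketch of what that overcommit looks like in practice (the 40% factor is just the ratio from my setup above, and the ccache compiler wrappers are one common way to hook it in):

    ```sh
    # Run ~40% more build jobs than hardware threads: with ccache serving
    # most compilations, the build becomes IO-bound rather than CPU-bound.
    JOBS=$(( $(nproc) * 14 / 10 ))

    # Route compilations through ccache; any build system works the same way.
    CC="ccache gcc" CXX="ccache g++" make -j"$JOBS"
    ```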

    I was a linux dev there, the pch’s worked, (…)

    I dare say ccache was not caching what it could due to precompiled headers. If you really want those, you need to configure ccache to tolerate them, as sketched below. Nevertheless, it’s a tad pointless to keep PCHs in a project for performance reasons when you can have a proper compiler cache.
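    A minimal sketch of that configuration, based on the ccache manual’s precompiled-header notes (the exact sloppiness values you need depend on your compiler and ccache version):

    ```sh
    # Tell ccache to tolerate precompiled headers instead of refusing to cache.
    ccache --set-config sloppiness=pch_defines,time_macros

    # With GCC you also need -fpch-preprocess on the compile line so ccache
    # can see through the PCH; "stdafx.h" is just a placeholder name here.
    g++ -fpch-preprocess -include stdafx.h -c foo.cpp -o foo.o
    ```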






  • the whole point of agile is to be short term

    Not really. The whole point of Agile is to iterate. This means short development cycles which include review and design rounds to adapt to changes that can and will surface throughout the project. Agile exists to eliminate the problems that arise when planning is so rigid that the process has no room to accommodate changes in business, project, and technical goals.

    This is why this whole “things need to be planned” crowd is simply talking out of ignorance. Agile requires global planning, but on top of that it supports design reviews along the way to face changing needs. This requires planning over the short, medium, and long term.

    Don’t blame Agile for your inability to plan. No one forces you not to plan ahead.


  • The primary problem is using agile all the time instead of when it is actually intended to be used: short term work that needs to be done quickly by a small team that are all on the same page already.

    I think you got it entirely backwards.

    The whole point of Agile is being able to avoid the “big design up front” approach that kills so many projects, and instead go through multiple design and implementation rounds, adapting your work to the end goal based on the lessons you pick up along the way.

    The whole point is improving the ability to deliver within long-term projects. Hence the need to iterate and adapt. None of these issues pose a challenge in short-term work.



  • On all the agile projects I’ve worked on, the teams have been very reluctant to make a specification in place before starting development.

    I don’t think this is an Agile thing at all. I mean, look at Agile’s main trait: multiple iterations with acceptance testing and product & design reviews. At each iteration there is planning. At each planning session you review/create tickets tracking goals and tasks. This makes it abundantly clear that Agile is based on your ability to plan for the long term while breaking and adapting that plan into multiple short-term ones.


  • I’ve been working with Agile for years and I’ve worked with people who burned out, but there was not a single case where Agile contributed to the burnout, directly or indirectly. In fact, Agile helped take pressure off developers and prevented people from overworking and burning out.

    The main factors in burnout were always the enforcement of unrealistic schedules and poor managerial/team culture. It’s not Agile’s fault that your manager wants a feature out in half the time while holding dismissals over your head.

    It’s not Agile’s fault that stack-ranking developers results in hostile team environments where team members don’t help people out and even go as far as putting up roadblocks elsewhere so that they aren’t the ones on the critical path. Agile explicitly provides the tools to make each of these burnout-inducing scenarios a non-issue.





  • It baffles me that you can advertise something as “unlimited” and then impose arbitrary limits after the fact.

    I didn’t see anything in the post suggesting that was the case. They start with a reference to an urgent call for a meeting from Cloudflare to discuss specifics of how they were using the hosting provider’s service, which sounds a lot like they were caught hiding behind the host doing abusive things, and afterwards they were explicitly called out for abusive behavior that violated the terms of service and jeopardized the hosting service’s reputation as a good actor.


  • First communication, because they clearly were confused about what was happening and felt like they didn’t have anyone technical explain it to them and it felt like a sales pitch.

    I don’t think that was the case.

    The Substack post is a one-sided and very partial account, and one that doesn’t pass the smell test. It uses an awful lot of weasel words and leaves out whole accounts of what was discussed with Cloudflare in meetings summoned as a matter of urgency.

    Occam’s razor suggests they were intentionally involved in multiple layers of abuse, were told to stop, ignored all warnings, and once the consequences hit they decided to launch a public attack on their hosting provider.






  • it’s about deploying multiple versions of software to development and production environments.

    What do you think a package is used for? I mean, what do you think “delivery” in “continuous delivery” means, and what’s its relationship with the deployment stage?

    Again, a cursory search for the topic would stop you from wasting time trying to reinvent the wheel.

    https://wiki.debian.org/DebianAlternatives

    Debian packages support pre- and post-install scripts. You can also bundle a systemd service with your deb packages. You can install multiple alternatives of the same package and have Debian switch between them seamlessly. All of this has been available by default for over a decade.
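    A hypothetical sketch of the alternatives mechanism from that wiki page; “myapp” and its paths are made-up names, not anything from a real package:

    ```sh
    # Register two installed versions of the same tool as alternatives.
    # Syntax: update-alternatives --install <link> <name> <path> <priority>
    update-alternatives --install /usr/local/bin/myapp myapp /opt/myapp-1.2/bin/myapp 10
    update-alternatives --install /usr/local/bin/myapp myapp /opt/myapp-2.0/bin/myapp 20

    # Switch between them interactively (or non-interactively with --set).
    update-alternatives --config myapp
    ```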






  • I’ve had mixed results with ccache myself, ending up not using it.

    Which problems did you experience?

    Compilation times are much less of a problem for me than they were before, because of the increases in processor power and number of threads.

    To each their own, but with C++ projects the only way to avoid stumbling into lengthy build times is to work exclusively on trivial projects. Incremental builds help blunt the pain, but that only goes so far.

    This together with pchs (…)

    This might be the reason ccache only went so far in your projects. Precompiled headers either prevent ccache from working, or require additional tweaks to get around them.

    https://ccache.dev/manual/4.9.1.html#_precompiled_headers

    Also noteworthy: msvc doesn’t play well with ccache. Details are fuzzy, but I think msvc supports building multiple source files in a single invocation, which prevents ccache from mapping an input to an output object file.
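    To illustrate the mismatch (a sketch of the general idea, not the exact mechanism):

    ```sh
    # msvc can compile several translation units in a single invocation...
    cl /c a.cpp b.cpp c.cpp

    # ...whereas ccache's caching model assumes one-input-one-output calls:
    g++ -c a.cpp -o a.o
    ```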