Ah, the Microsoft tradition of always having the wrong priorities.
I wouldn’t be too hard on Microsoft. The requirement to curate public package repositories only emerged relatively recently, as the likes of npm demonstrate, and putting a process in place to audit and pull offending packages might not be straightforward.
I think the main take on this is to learn the lesson that it is not safe to install random software you come across online. Is this lesson new, though?
Agile is not a system. It’s a set of principles, laid out in the Agile manifesto.
The Agile manifesto boils down to a set of priorities that aren’t even set as absolutes.
I strongly recommend you read up on Agile before blaming things you don’t like on things you don’t understand.
ccache folder size started becoming huge. And it just didn’t speed up the project builds, I don’t remember the details of why.
That’s highly unusual, and suggests your project was misconfigured in a way that prevented cache hits: instead of speeding up builds, ccache just kept gathering compiled objects it could never reuse.
When I tried it I was working on a 100+ devs C++ project, 3/4M LOC, about as big as they come.
That’s not necessarily a problem. I worked on C++ projects of a similar size and ccache just worked. It has more to do with how your project is set up, and with misconfigurations.
Compilation of everything from scratch was an hour at the end.
That fits my use case as well. End-to-end builds took slightly longer than 1h, but after onboarding ccache the same end-to-end builds took less than 2 minutes. Incremental builds were virtually instant.
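For reference, wiring ccache into a CMake-based build is usually a one-liner; the paths here are hypothetical and the launcher variable is standard CMake (3.4+):

```shell
# Route every compiler invocation through ccache via CMake's
# per-language compiler launcher.
cmake -S . -B build \
  -DCMAKE_C_COMPILER_LAUNCHER=ccache \
  -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
cmake --build build -j"$(nproc)"

# After a rebuild, check hit/miss rates to confirm the cache works.
ccache --show-stats
```

If the stats show misses on unchanged files, that is the misconfiguration symptom described above.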
Switching to lld was a huge win, as was going from 12 to 24 compilation threads.
That’s perfectly fine. Ccache acts before linking, and naturally being able to run more parallel tasks can indeed help, regardless of ccache being in place.
Surprisingly, ccache works even better in this scenario. With ccache, the bottleneck of any build task switches from the CPU/Memory to IO. This had the nice trait that it was now possible to overcommit the number of jobs as the processor was no longer being maxed out. In my case it was possible to run around 40% more build jobs than physical threads to get a CPU utilization rate above 80%.
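As a sketch of that overcommit (the 40% figure is from my case above; tune it to your machine):

```shell
# With a warm ccache most "compiles" are cache lookups (IO-bound),
# so running more jobs than hardware threads keeps the CPU busy.
THREADS=$(nproc)
JOBS=$(( THREADS * 14 / 10 ))   # ~40% overcommit
cmake --build build -j "$JOBS"
```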
I was a linux dev there, the pch’s worked, (…)
I dare say ccache was not caching what it could due to precompiled headers. If you really want those, you need to configure ccache to tolerate them. Nevertheless it’s a tad pointless to have pch in a project for performance reasons when you can have a proper compiler cache.
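If you do keep precompiled headers, the ccache manual documents the knobs needed to tolerate them; roughly (double-check against your ccache version):

```shell
# Per the ccache manual, pch support requires relaxing these checks,
# either in ccache.conf (sloppiness = pch_defines,time_macros) or via
# the environment:
export CCACHE_SLOPPINESS=pch_defines,time_macros

# With GCC/Clang, also compile with -fpch-preprocess so ccache can
# hash the preprocessed source including the pch contents.
export CXXFLAGS="-fpch-preprocess $CXXFLAGS"
```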
Also interesting: successful software projects don’t just finish and die. They keep going, adapt to changes, and implement new features. If a successful project runs for a decade alongside a clusterfuck of a project that blows up and restarts every year over the same period, by this metric you’d record only a 10% success rate.
If you write it down it is documentation.
I think you’re not getting the point.
It doesn’t matter that you wrote something down. For a project, only the requirements specification matters. The system requirements specification document lists exactly what you need to deliver and under which conditions. Writing a README.md or posting something in a random wiki counts for nothing.
Requirements are not the same thing as specifications either, but both are documentation!
https://en.wikipedia.org/wiki/System_requirements_specification
that managers want to stay in control of everything, and they decide whether they do it or not.
That’s fine, it’s a call from the manager.
That doesn’t make it Agile’s fault though. In fact, one of the key principles of Agile is providing developers with the support they need. Blaming Agile for a manager single-handedly pushing for something in spite of any feedback has no basis.
So you started with the need to authenticate, which should be documented in the requirements. You know, the things that are required to happen.
I think you’re confusing documentation with specification.
Requirements are specified. They are the goals and the conditions under which they are met. Documentation just means a paper trail of how things were designed and are expected to work.
Requirements drive the project. Documentation always lags behind it.
the whole point of agile is to be short term
Not really. The whole point of Agile is to iterate. This means short development cycles that include review and design rounds to adapt to changes that can and will surface throughout the project. Agile exists to eliminate the problems caused by rigid planning that leaves no room to accommodate changing business, project, and technical goals.
This is why the whole “things need to be planned” crowd is simply talking out of ignorance. Agile requires global planning, and on top of it adds design reviews along the way to face changing needs. This requires planning in the short, medium, and long term.
Don’t blame Agile for your inability to plan. No one forces you not to plan ahead.
The primary problem is using agile all the time instead of when it is actually intended to be used: short term work that needs to be done quickly by a small team that are all on the same page already.
I think you got it entirely backwards.
The whole point of Agile is being able to avoid the “big design up front” approach that kills so many projects, and instead go through multiple design and implementation rounds to adapt your work to the end goal based on what lessons you’re picking up along the way.
The whole point is improving the ability to deliver within long term projects. Hence the need to iterate and to adapt. None of these issues pose a challenge in short term work.
Note that this is failure to deliver on time, not failure to deliver full stop.
It’s also important to note that the hallmark of non-Agile teams is de-scoping and under-delivering. It’s easy to deliver something on time if you quietly shift your delivery goals and remove or half-bake features, technically meeting the requirements on paper while not actually meeting them.
On all the agile projects I’ve worked on, the teams have been very reluctant to make a specification in place before starting development.
I don’t think this is an Agile thing at all. I mean, look at Agile’s main trait: multiple iterations with acceptance testing and product & design reviews. Each iteration starts with planning. At each planning session you review/create tickets tracking goals and tasks. This makes it abundantly clear that Agile is based on your ability to plan for the long term while breaking that progress down into multiple short-term plans you can adapt.
I’ve been working with Agile for years and I’ve worked with people who burned out, but there was not a single case where Agile contributed to the burnout, directly or indirectly. In fact, Agile helped take pressure off developers and prevented people from overworking and burning out.
The main factors in burnout were always the enforcement of unrealistic schedules and poor managerial/team culture. It’s not Agile’s fault that your manager wants a feature out in half the time while holding dismissals over your head.
It’s not Agile’s fault that stack ranking developers results in hostile team environments where team members don’t help people out and even go as far as putting up roadblocks elsewhere so that they aren’t the ones on the critical path. Agile explicitly provides the tools to make each of these burnout-inducing scenarios a non-issue.
std::unordered_map is one of the worst ones
It should be noted that the benchmark focused on the std::unordered_map implementation from GCC 13.2.0 (libstdc++). I’m not sure if/how this conclusion extends to other implementations such as MSVC’s STL or Clang’s libc++.
It baffles me that you can advertise something as “unlimited” and then impose arbitrary limits after the fact.
I didn’t see anything in the post that suggests that was the case. It starts with a reference to an urgent call for a meeting from Cloudflare to discuss specifics of how they were using the hosting provider’s service, which sounds a lot like they were caught hiding behind the host while doing abusive things, and afterwards they were explicitly called out for abusive behavior that violated the terms of service and jeopardized the hosting service’s reputation as a good actor.
First communication, because they clearly were confused about what was happening and felt like they didn’t have anyone technical explain it to them and it felt like a sales pitch.
I don’t think that was the case.
The Substack post is a one-sided and very partial account, and one that doesn’t pass the smell test. It uses an awful lot of weasel words and leaves out whole accounts of what was discussed with Cloudflare in meetings summoned as a matter of urgency.
Occam’s razor suggests they were intentionally involved in multiple layers of abuse, were told to stop it, ignored all warnings, and once the consequences hit they decided to launch a public attack on their hosting providers.
it’s about deploying multiple versions of software to development and production environments.
What do you think a package is used for? I mean, what do you think “delivery” in “continuous delivery” means, and what’s its relationship with the deployment stage?
Again, a cursory search for the topic would stop you from wasting time trying to reinvent the wheel.
https://wiki.debian.org/DebianAlternatives
Debian packages support pre- and post-install scripts. You can also bundle a systemd service with your .deb packages. You can install multiple alternatives of the same package and have Debian switch between them seamlessly. All of this has been available by default for over a decade.
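For illustration, the alternatives mechanism boils down to a couple of commands (the tool name and paths here are hypothetical):

```shell
# Register two installed versions of a hypothetical tool under one
# name; in auto mode the higher priority wins.
sudo update-alternatives --install /usr/bin/mytool mytool /opt/mytool-1.0/bin/mytool 10
sudo update-alternatives --install /usr/bin/mytool mytool /opt/mytool-2.0/bin/mytool 20

# Switch between installed versions interactively, no reinstall needed.
sudo update-alternatives --config mytool
```

A package’s postinst/prerm maintainer scripts usually run these calls, so the switching is invisible to the user.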
I feel this sort of endeavour is just a poorly researched attempt at reinventing the wheel. Packaging formats such as Debian’s .deb consist basically of the directory tree to be deployed, archived (an ar archive wrapping tarballs), along with a couple of metadata files. It’s not rocket science. In contrast, these tricks sound like overcomplicated hacks.
Logging in local time is fine as long as the offset is marked.
I get your point, but that’s just UTC with extra steps. I feel that there’s no valid justification for using two entries instead of just one.
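The “UTC with extra steps” point is easy to show in Python; the instant and offset below are arbitrary examples:

```python
from datetime import datetime, timedelta, timezone

# One instant, two ISO 8601 representations.
utc = datetime(2024, 3, 1, 19, 30, tzinfo=timezone.utc)
local = utc.astimezone(timezone(timedelta(hours=-5)))  # e.g. a UTC-5 server

print(utc.isoformat())    # 2024-03-01T19:30:00+00:00
print(local.isoformat())  # 2024-03-01T14:30:00-05:00

# Both encode the same instant, so comparisons agree...
assert utc == local
# ...but sorting or grepping logs in the local form first requires
# normalizing every offset; the UTC form is already normalized.
```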
I’ve had mixed results with ccache myself, ending up not using it.
Which problems did you experience?
Compilation times are much less of a problem for me than they were before, because of the increases in processor power and number of threads.
To each their own, but with C++ projects the only way not to stumble over lengthy build times is to work exclusively on trivial projects. Incremental builds help blunt the pain, but that only goes so far.
This together with pchs (…)
This might be the reason ccache only went so far in your projects. Precompiled headers either prevent ccache from working, or require additional tweaks to get around them.
https://ccache.dev/manual/4.9.1.html#_precompiled_headers
Also noteworthy: MSVC doesn’t play well with ccache. Details are fuzzy, but I think MSVC supports building multiple source files with a single invocation, which prevents ccache from mapping an input to an output object file.
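A sketch of the difference, with hypothetical file names:

```shell
# One cl.exe invocation covering several translation units: a compiler
# cache cannot attribute one input file to one cached object file.
cl.exe /c a.cpp b.cpp c.cpp

# One invocation per translation unit: each maps cleanly to a single
# object file, which is the shape compiler caches expect.
cl.exe /c a.cpp
cl.exe /c b.cpp
cl.exe /c c.cpp
```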
Running JavaScript everywhere is looming as one of the biggest screwups in InfoSec. What do userscript extensions like Greasemonkey teach us?