“IT people” here, operations guy who keeps the lights on for that software.
It’s been my experience that developers have no idea how the hardware works, but STRONGLY believe they know more than me.
Devops is also usually more dev than ops, and it shows in the availability numbers.
Yup. Programmers who have only ever been programmers tend to act like god’s gift to this world.
As a QA person I despise those programmers.
“Sure buddy it works on your machine, but your machine isn’t going to be put into the production environment. It’s not working on mine, fix it!”
Thus, Docker was born.
“Works on my machine, ship the machine.”
My answer is usually “I don’t care how well it runs on your Windows machine. Our deployments are on Linux”.
I’m an old developer that has done a lot of admin over the years out of necessity.
As a developer I can freely admit that without the operations people the software I develop would not run anywhere but on my laptop.
I know as much about hardware as a cook knows about his stove and the plates the food is served on – more than the average person but waaaay less than the people producing and maintaining them.
As a devops manager that’s been both, it depends on the group. Ideally a devops group has a few former devs and a few former systems guys.
Honestly, the best devops teams have at least one guy that’s a liaison with IT who is primarily a systems guy but reports to both systems and devops. Why?
It gets you priority IT tickets and access while systems trusts him to do it right. He’s like the crux of every good devops team. He’s an IT hire paid for by the devops team budget as an offering in exchange for priority tickets.
But in general, you’re absolutely right.
Am I the only guy that likes doing devops that has both dev and ops experience and insight? What’s with siloing oneself?
I’ve done both, it’s just a rarity to have someone experienced enough in both to be able to cross the lines.
Those are your gems and they’ll stick around as long as you pay them decently.
Hard to find.
Because the problem is that you need:
A developer
A systems guy
A social and great personality
The job is hard to hire for because those 3 in combo is rare. Many developers and systems guys have prickly personalities or specialise in their favourite part of it.
DevOps don’t have the option of prickly personalities, because you have to deal with so many people outside your team who are prickly and who you sometimes have to give bad news to…
Eventually they’ll all be mad at you for SOMETHING… and you have to let it slide. You have to take their anger and not take it personally… That’s hard for most people, let alone tech workers who grew up idolising Linus Torvalds or Sheldon Cooper and their “I’m so smart that I don’t need to be nice” attitudes.
Fantastic summary. For anyone wondering how to get really really valuable in IT, this is a great write-up of why my top paid people are my top paid people.
I’ve always found this weird. I think to be a good software developer it helps to know what’s happening under the hood when you take an action. It certainly helps when you want to optimize memory access for speed etc.
I genuinely do know both sides of the coin. But I do know that the majority of my fellow developers at work most certainly have no clue about how computers work under the hood, or networking for example.
I find it weird because being good at software development (and I don’t mean following what the computer science methodology tells you; I mean having an idea of the best way to translate an idea into a logical solution that can be applied in any programming language, and most importantly how to optimize your solution, for example in terms of memory access) requires an understanding of the underlying systems. And if you write software that sends or receives network packets, it certainly helps to understand how that works, at least to consider the best protocols to use.
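A small example of the kind of protocol-level awareness this is getting at: if an application sends lots of tiny, latency-sensitive messages over TCP, knowing that the kernel batches small writes by default (Nagle’s algorithm) tells you which knob to reach for. This is just a sketch against POSIX sockets, not anything from the comment above; error handling is omitted:

```cpp
// Sketch: latency-sensitive TCP traffic often wants Nagle's algorithm off,
// so tiny messages go out immediately instead of being coalesced.
// Assumes a POSIX platform; error handling omitted for brevity.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int make_low_latency_socket() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);   // ordinary TCP socket
    int one = 1;
    // TCP_NODELAY: send small segments right away rather than waiting to batch them.
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
    return fd;
}
```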
But, it is definitely true.
There are so many layers of abstraction that it becomes impossible to know everything.
Years ago, I dedicated a lot of time to understanding how bytes travel from a server through your router and into your computer. Very low-level mastery.
That education is now trivia, because cloud servers, Cloudflare, region points, edge servers, company firewalls… all of them are barriers that add more and more layers of complexity that I don’t have direct access to but that can affect the applications I build. And it continues to grow.
Add this to the pile of updates to computer languages, new design patterns to learn, operating system and environment updates…
This is why engineers live alone on a farm after they burn out.
It’s not feasible to understand everything under the hood anymore. What’s under the hood grows faster than you can pick it up.
I’d agree that there’s a lot more abstraction involved today. But my main point isn’t that people should know everything; it’s that having a baseline understanding of how even a basic microcontroller works would be helpful.
Where I work, people often come to me with weird problems, and the way I solve them is usually grounded in a low-level understanding of what’s really happening when the code runs.
One may also end up developing in the areas that the above post considers inaccessible, where that knowledge is likely still required.
yeah i wish it was a requirement that you’re nerdy enough to build your own computer or at least be able to install an OS before joining the SWE industry. the non-nerds are too political and can’t figure out basic shit.
This is like saying before you can be a writer, you need to understand latin and the history of language.
You should if you want to be a science writer or academic, which, let’s be honest, is a better comparison here. If your job involves Latin for names and descriptions then you probably should take at least a year or two of Latin if you don’t want to make mistakes here and there out of ignorance.
Before you can be a writer, you need to sharpen your own pencil.
I do like the idea of informing yourself a little more about the note-taking app you’re writing with. It sounds kind of obvious, but it can have many advantages.
Personally, though, I don’t really see the upside of building a computer, since you could just research how it works without building one, or vice versa. (Maybe it’s good for making sense of bug reports?)
A 30 minute explanation on how CPUs work that I recently got to listen in on was likely more impactful on my C/assembly programming than building my own computer was.
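For what it’s worth, the classic illustration of that kind of CPU knowledge paying off is access order: the same arithmetic done in a cache-friendly versus cache-hostile order. This is just a generic sketch (the 4096×4096 size is arbitrary), not anything from the explanation mentioned above:

```cpp
// Sketch: summing a matrix row-by-row vs column-by-column.
// Same arithmetic, very different memory access pattern: the row-major walk
// streams through contiguous memory, while the column-major walk touches a
// new cache line on almost every access once the data outgrows the cache.
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 4096;                 // 4096 x 4096 ints = 64 MiB
    std::vector<int> m(n * n, 1);

    auto time_sum = [&](bool row_major) {
        auto start = std::chrono::steady_clock::now();
        long long sum = 0;
        for (std::size_t i = 0; i < n; ++i)
            for (std::size_t j = 0; j < n; ++j)
                sum += row_major ? m[i * n + j]   // contiguous, stride of 1
                                 : m[j * n + i];  // stride of n: cache-hostile
        double s = std::chrono::duration<double>(
                       std::chrono::steady_clock::now() - start).count();
        std::printf("%-12s sum=%lld  %.3f s\n",
                    row_major ? "row-major" : "column-major", sum, s);
    };

    time_sum(true);
    time_sum(false);
}
```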
you wouldn’t want somebody that hates animals to become a veterinarian just because of money-lust. the animals would suffer, the field as a whole, too. maybe they start buying up veterinary offices and squeeze the business for everything they can, resulting in worse outcomes- more animals dying and suffering, workers get shorted on benefits and pay.
people chasing money ruin things. we want an industry full of people that want to actually build things.
I don’t really see the connection to my comment.
In this example wouldn’t the programmer be more of a pharmacist? (With the animal’s body as the computer and its brain as the user?)
Your statement is not wrong, it just seems unrelated.
weird, i studied latin and the history of language just because i found it interesting. i am always seeking to improve my writing skills tho.
I think software was a lot easier to visualise in the past when we had fewer resources.
Stuff like memory becomes almost meaningless when you never really have to worry about it. 64,000 bytes was an amount that made sense to people. You could imagine chunks of it. 64 billion bytes is a nonsense number that people can’t even imagine.
When I was talking about memory, I was more thinking about how it is accessed. For example, knowing exactly which actions are atomic and which are not on a given architecture, since these can cause unexpected interactions in multi-core work depending on, say, byte alignment. Also knowing how to make the most of your CPU cache. Those kinds of things.
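A minimal sketch of one of those interactions, false sharing: two threads each update their own counter, but if the counters sit on the same cache line the cores still fight over it. The 64-byte line size below is an assumption; it varies by CPU:

```cpp
// Sketch of false sharing: each thread increments its own atomic counter,
// yet performance differs a lot depending on whether the counters share a
// cache line. alignas(64) assumes 64-byte lines, which varies by CPU.
#include <atomic>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <thread>

struct Packed {                  // adjacent counters: likely the same cache line
    std::atomic<std::uint64_t> a{0};
    std::atomic<std::uint64_t> b{0};
};

struct Padded {                  // each counter gets its own cache line
    alignas(64) std::atomic<std::uint64_t> a{0};
    alignas(64) std::atomic<std::uint64_t> b{0};
};

template <typename Counters>
double hammer_seconds(Counters& c) {
    auto bump = [](std::atomic<std::uint64_t>& n) {
        for (int i = 0; i < 20'000'000; ++i)
            n.fetch_add(1, std::memory_order_relaxed);
    };
    auto start = std::chrono::steady_clock::now();
    std::thread t1(bump, std::ref(c.a));
    std::thread t2(bump, std::ref(c.b));
    t1.join();
    t2.join();
    return std::chrono::duration<double>(
               std::chrono::steady_clock::now() - start).count();
}

int main() {
    Packed packed;
    Padded padded;
    std::printf("shared cache line:    %.3f s\n", hammer_seconds(packed));
    std::printf("separate cache lines: %.3f s\n", hammer_seconds(padded));
}
```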
But that IS computer science.
It very much depends on how close to hardware they are.
If someone is working with C# or JavaScript, they are about as knowledgeable about hardware as someone working in Excel (I know this statement is tantamount to treason, but as far as hardware is concerned it’s true).
But if you are working with C or Rust, or god forbid drivers, you probably know more than the average IT professional; you might even have helped correct hardware issues.
Long story short it depends.
Absolutely agree, as a developer.
The devops team built a pretty effective setup for our devops pipeline that allows us to scale infinitely. Which would be great if we had infinite resources.
We’re hitting situations where the solution is to throw more hardware at it.
And IT cannot provision tech fast enough within budget for any of this. So devs are absolutely suffering right now.
Maybe you should consider a bigger budget?
I know this is not everyone and there are some unicorns out there, but after working with hiring managers for decades I can’t help but see cheap programmers when I see DevOps. It’s either Ops people who think they are programmers, or programmers who are not good enough to get hired outright as Software Engineers at higher pay. When one person is both, they can do both, but aren’t great at either one. DevOps works best when it’s a team of both Dev and Ops working together. IMO.
20 year IT guy and I second this. Developers tend to be more troublesome than the manager wanting a shared folder for their team.
Rough and that sucks for your organization.
Our IT team would rather sit in a room with developers and solve those problems, than deal with hundreds of non-techs who struggle to add a chrome extension or make their printer icon show up.
I would love to work through issues, but the stock of developers we currently have seem to either be the rejects or have, as someone else stated, “a god complex”. They remind me of pilots in the military. All in all it is a loss for my organization.
As a developer I like to mess with everything. Currently we are doing an infrastructure migration and I had to do a lot of non-development stuff to make it happen.
Honestly I find it really useful (but not necessary) to have some understanding of the underlying processes of the code I’m working with.
I work on a team that is mainly infrastructure and operations. As one of the only people writing code on the team, I have to appreciate what IT support does to keep everything moving. I don’t know why so many programmers have to get a chip on their shoulder.
Could you give an example of something related to hardware that most developers don’t know about?
Simple example: our NASes are EMC2. The devs over at the company that makes the software say they’re garbage and that we should change them.
Mind you, these things have been running for 10 years straight 24/7, under load most of the time, and we’ve only swapped like 2 drives, total… but no, they’re garbage 🤦…
Accurate!
Developers are frequently excited by the next hot thing or how some billionaire tech companies operate.
I’m guilty of seeing something that was last updated in 2019 and having a look of disgust.
Well, at least you admit it, not everyone does.
I do agree that they’re out of date, but that wasn’t their point; their software somehow doesn’t like the NASes, so they had to look into where the problem was. But their first thought was “let’s tell them they’re no good and tell them which ones to buy so we wouldn’t have to look at the code”.
That sounds extremely lazy. I’d expect more from a dev team.
Me too, but apparently, that wasn’t the case.
My reasoning was, they’d have to send someone over to do tests and build the project on site, install and test, since we couldn’t give any of those NASes to them for them to work on the problem, and they’d rather not do that, since it’s a lot more work and it’s time consuming.
Couldn’t they remotely connect to them?
Yeah, they tried that a few times, the software would glitch and Windows would either BSOD or just freeze (had something to do with how their HASP dongle license communicated with the software and the drivers it used, nothing to do with our issue whatsoever, lol). In the end, they had to come down and debug the issue on a separate rig and install. We just couldn’t have those interruptions in production.
If it’s publicly accessible it likely has a bunch of vulnerabilities so I too understand that look.
Sorry, this comment is causing me mental whiplash so I am either ignorant, am subject to non-standard circumstances, or both.
My personal experience is that developers (the decent ones at least) know hardware better than IT people. But maybe we mean different things by “hardware”?
You see, I work as a game dev so a good chunk of the technical part of my job is thinking about things like memory layout, cache locality, memory access patterns, branch predictor behavior, cache lines, false sharing, and so on and so forth. I know very little about hardware, and yet all of the above are things I need to keep in mind and consider and know to at least some usable extent to do my job.
While IT are mostly concerned with keeping the idiots from shooting the company in the foot, by rolling out software that lets them diagnose, reset, install or uninstall things across entire fleets of computers at once. It also just so happens that this software is often buggy and burns 99% of your CPU in spin loops (they had to roll that back, of course), or the antivirus rules don’t apply on your system for whatever reason, causing the antivirus to scan all the object files generated by the compiler even though they’re in a whitelisted directory, which turns a ten-minute rebuild into an hour.
They are also the ones that force me to change my (already unique and internal) password every few months for “security”.
So yeah, when you say that developers often have no idea how the hardware works, the chief questions that come to mind are:
What kind of dev doesn’t know how hardware works to at least a usable extent?
What kind of hardware are we talking about?
What kind of hardware would an IT person need to know about? Network gear?
When IT folks say devs don’t know about hardware, they’re usually talking about the forest-level overview, in my experience. Stuff like how the software being developed integrates into an existing environment and how to optimize code to fit within the bounds of reality: it may be practical to dump a database directly into memory when it’s a 500 MB testing dataset on your local workstation, but it’s insane to do that with a 500+ GB database in a production environment. Similarly, a program may run fine when it’s using an NVMe SSD, but lots of environments even today still depend on arrays of traditional electromechanical hard drives, because they offer the most capacity per dollar and aren’t as prone to suddenly tombstoning the way flash media is when it dies. Suddenly, once the program is in production, it turns out that same program is making a bunch of random I/O calls that could be optimized into a more sequential request or batched together into a single transaction, and now it runs like dogshit and drags down every other VM, container, or service sharing that array with it. That’s not accounting for the real dumb shit I’ve read about, like “dev hard-coded their local IP address and it breaks in production because of NAT” or “program crashes because it doesn’t account for network latency.”
Game dev is unique because you’re explicitly targeting a single known platform (for consoles) or an extremely wide range of performance specs (for PC), and hitting an acceptable level of performance pre-release is (somewhat) mandatory, so this kind of mindfulness is drilled into devs much more heavily than it is in business software development, especially in-house development. Business development is almost entirely focused on “does it run without failing catastrophically”, and almost everything else (performance, security, cleanliness, resource optimization) is given bare lip service at best.
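To make the random-vs-batched I/O point above concrete, here’s a toy sketch; the file names and record count are invented, and the same idea applies to wrapping many small INSERTs in one database transaction:

```cpp
// Toy sketch: one tiny flushed write per record vs one big sequential write.
// The second form is what "batch it into a single request/transaction" means;
// file names and record count are made up for the example.
#include <fstream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> records(100000, "some row of data\n");

    // Naive: every record becomes its own small write that is flushed out
    // immediately, so the storage layer sees a stream of tiny I/Os.
    std::ofstream slow("records_slow.txt");
    for (const auto& r : records) {
        slow << r;
        slow.flush();
    }

    // Batched: build the payload in memory, then hand it over in one
    // large sequential write (flushed once when the stream closes).
    std::string batch;
    batch.reserve(records.size() * records.front().size());
    for (const auto& r : records) batch += r;

    std::ofstream fast("records_fast.txt");
    fast << batch;
}
```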
Thank you for the explanation, now I understand the context of the original message. It’s definitely an entirely different environment, especially with the kind of software that runs on a bunch of servers.
I built business programs before becoming a game dev, though still the kind that runs on-device rather than on a server. Even then, I always strived to write the most correct and performant code. Of course, I still wrote bugs, like the time a release broke the app for a subset of users because one of the database migrations didn’t apply to some real-world use case. Unfortunately, that one was due to us not having access to real-world databases or good enough surrogates due to customer policy (we were writing a unification software of sorts; up until this project every customer could give different meanings to each database column, as they were just freeform text fields. Some customers even changed the schema). The migrations ran perfectly on each of the test databases that we did have access to, but even then I did the obvious: rolled the release back, added another test database that replicated the failing real-world use case, fixed the failing migrations, and re-released.
So yeah, from your post it sounds that either the company is bad at hiring, bad at teaching new hires, or simply has the culture of “lol who cares someone else will fix it”. You should probably talk to management. It probably won’t do anything in the majority of cases, but it’s the only way change can actually happen.
Try to schedule a one-on-one session with your manager every 2 to 3 weeks to assess which systematic errors in the company are causing issues. 30-minute sessions, just to make them aware of which parts of the company need fixing.
Game development is a very specific use case, and NOT what most people think of when talking about devs vs ops.
I’m talking enterprise software and SaaS companies, which would be a MUCH larger part of the tech industry than games.
There are a large number of devs who think public cloud as infrastructure is ALWAYS the right choice for cost and availability, for example… which in my experience is actually backwards, because legacy software and bad developers fail to understand the limitations of these platforms, that they’re untrustworthy by design, and outages ensue.
In these scenarios understanding how the code interacts with actual hardware (network, server and storage, or their IaaS counterparts) is like black magic to most devs… They don’t get why their designs are going to fall over and sink into the swamp because of their naivete. It works fine on their laptop, but when you deploy to prod and let customer traffic in, it becomes a smoking hole.
Apologies for the tangent:
I know we’re just having fun, but in the future consider adding the word “some” to statements about groups. It’s just one word, but it adds a lot of nuance and doesn’t make the joke less funny.
That 90’s brand of humor of “X group does Y” has led many in my generation to think in absolutes and to get polarized as a result. I’d really appreciate your help to work against that for future generations.
Totally optional. Thank you
In my experience a lot of IT people are unaware of anything outside of their bubble. It’s a big problem in a lot of technical industries with people who went to school to learn a trade.
The biggest reason for the bubble that IT insulates themselves into is that if you don’t, users will never submit tickets and will keep coming to you personally, then get mad when their high-priority concern sits for a few days because you were out of office and the rest of the team got no tickets, since the user decided they were better than a ticket.
If people only know how to summon you through the ancient ritual of ticket opening with sufficient information, they’ll follow that ritual religiously to summon you when needed. If they know “oh just hit up Rob on Teams, he’ll get you sorted”, the mysticism is ruined and order is lost.
Honestly, I say this all partially jokingly. We do try to insulate ourselves because we know some users will try to bypass tickets given any opportunity to do so, but there is very much value in balancing that need with accessibility and visibility. The safe option is to hide in your basement office and avoid mingling, but that’s also the option that limits your ability to improve yourself and your organization.
I meant knowledge-wise. Many people in technical industries lack the ability to diagnose issues because they don’t have a true understanding of what they actually do. They follow diagnostic trees or subscribe to what I call “the rain dance” method, where they know something fixes a problem but they don’t really know why. If you mention anything outside of their small reality, they will refuse to acknowledge its existence.