I post this not for the article itself, but for one of the first comments underneath.
Someone posted:
"Unexpected technical issues
Really? They were unexpected? Testing didn't bring ANY of these issues up?
I could understand a few bugs might slip through the cracks, but I would have thought a game publisher would not have these kinds of issues after launching many games without major bugs. (I have no citation on this, but I would figure that most of their games aren't this bad on launch day)."
And a response was:
"You would be surprised at the number of so-called expert testers who ask for the latest, most powerful machines with the latest OS versions etc. as they claim that this will aid their testing. I've seen this in a company where the target machines that actual users were using were known to be older with less powerful graphics cards and some old software for compatibility with some products. I'd imagine the lure of a new machine is eve greater when they don't know what users will be using."
Speaking as someone who worked for a QA testing company for about 10 years, I will say that the second guy is right. If that's how you're doing your testing, then you're doing it wrong. Yes, we did testing on the most state-of-the-art PCs (and Macs and Linux machines) that we could get our hands on, but we also did the same testing on funky, old-ass shit, meaning software (think Windows 95/98/XP or older Linux and Mac OS versions, or ancient DirectX and OpenGL versions) as well as hardware (decades-old graphics cards and such) that nobody in their right mind would ever recommend continuing to use in this day and age, because we knew that there were people out there who were still using that old-ass shit. And we'd definitely find bugs on that shit, without a doubt.

Sometimes the response of the companies we were testing for would be to, you know, actually fix the issues. Sometimes the response would be to simply post a "no longer supported" section in their ReadMe.txt or whatever for the affected software/hardware. Sometimes the response, disappointingly and vexingly, would be to simply not address (or even acknowledge) the issue at all. On less frequent occasions, we'd find bugs where something would be broken on the new shit but work fine on the old shit. Those tended to get fixed more promptly than the other stuff.
Then again, for this Ubisoft Assassin's Creed bullshit, a lot of the horrible broken crap shows up on the consoles as well as (or instead of) the PC stuff, so I don't know what their crummy excuse is for letting that slip through.
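Anyway, just to make concrete what I mean by running the same tests across both the new shit and the old shit: the idea is basically a compatibility matrix, where you run one set of checks over every combination of OS, graphics API, and hardware you care about, instead of only on the newest machine in the building. Here's a rough sketch of that idea in Python. To be clear, the configuration names and the run_smoke_test() helper below are made-up stand-ins for illustration, not any real harness or tool we actually used.

import itertools

# Hypothetical example configurations; a real matrix would come from
# whatever the publisher knows about its actual player base, and would
# prune impossible combinations (e.g. a 2014 GPU on Windows 98).
OPERATING_SYSTEMS = ["Windows 98", "Windows XP", "Windows 8.1", "Ubuntu 14.04"]
GRAPHICS_APIS = ["DirectX 9", "DirectX 11", "OpenGL 2.1", "OpenGL 4.4"]
GPUS = ["GeForce 6600 (2004)", "GeForce GTX 980 (2014)"]

def run_smoke_test(os_name: str, gfx_api: str, gpu: str) -> bool:
    """Stand-in for a real harness that would boot the game on the
    given configuration and report pass/fail."""
    return True  # replace with an actual launch-and-check step

def run_matrix() -> list[tuple[str, str, str]]:
    """Run the same smoke test on every combination and collect failures."""
    return [
        combo
        for combo in itertools.product(OPERATING_SYSTEMS, GRAPHICS_APIS, GPUS)
        if not run_smoke_test(*combo)
    ]

if __name__ == "__main__":
    for os_name, gfx_api, gpu in run_matrix():
        print("FAIL: %s / %s / %s" % (os_name, gfx_api, gpu))

The point of structuring it this way is that the old configurations get exactly the same coverage as the shiny new ones, which is how you catch the "works on my (brand-new) machine" class of bugs before launch instead of after.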
Date: 2014-11-28 04:55 pm (UTC)
The simple fact is they wanted the game out on a particular day, regardless of whether or not the game was actually done. And that's exactly what they did.
Date: 2014-12-04 06:38 pm (UTC)
Of course, a lot of (sensible) people simply won't buy a game if they can't read reviews of it first, which is why most companies let reviewers play the game prior to launch so that the reviews will be available by launch time. And so, yes, the fact that Ubisoft embargoed this shit should have been an astoundingly obvious red flag that the game was going to be complete shit (a red flag which, frustratingly, far too many idiots ignored, and which, obviously, is exactly what Ubisoft was banking on all along). But then, because of all the perks and other underhanded shit that goes on between the game "journalists" and the companies, and all the 9.8-out-of-10 reviews or whatever for games that turn out to be broken, unplayable horseshit, a lot of people (myself included) no longer trust the "professional" reviews anyway, so they may as well not even exist at all, as far as that goes.
If everyone would just do what I do, i.e. never buy games at launch (http://kane-magus.livejournal.com/597285.html) anymore, a lot of the problems we're seeing with the game industry would disappear pretty quickly. Well, either that or the game industry itself would be severely curtailed, which wouldn't be a bad thing either. (And I am dead serious when I say that I would not cry a bit if there came another big industry crash (http://en.wikipedia.org/wiki/North_American_video_game_crash_of_1983) someday, preferably sooner rather than later.) But most game buyers seem to be far too impatient to do the sensible thing these days, which is why the Ubisofts and the EAs and all the other terrible game companies are able to still exist at all.