Client R8 / Server R12


Re: Client R8 / Server R12

Post by Marc3l »

[Update 20:10]
A few minutes ago DICE informed us that the R8 client will be released this Wednesday, 30 June, at 07:00 Central European Time. All SwissQuake BC2 game servers will be updated that morning between 07:00 and 09:00. A client update is mandatory, otherwise you will no longer be able to connect to an R15 server afterwards.

Re: Client R8 / Server R12

Post by Marc3l »

By the way, MikaelKalms has commented on the delays and given a little insight into the processes behind patching.
This is a story about software engineering, file formats, build processes, and packaging. All framed within the Project Management Triangle.


Developing a game is developing a software suite, and a dataset that will go along with the software. Users will use the software (= run the game executable) to manipulate the dataset (= use mouse/keyboard/joystick to control the game).

A good game requires that the capabilities of the software match the dataset that is being created. The software is usually refined in parallel with the dataset. In other words, the game engine and the game worlds are tailored to each other, to some extent.

The programming side of making a game corresponds fairly closely to developing other kinds of software. You usually use mature languages, standardized tools, and techniques which were pioneered back in the 1960s to create a body of source code which, when built, produces a single game executable.

Creating the content is less straightforward. Sometimes there are tools that do the job well (Maya, Photoshop, SoundForge, Cubase etc.). For other kinds of content there are no good tools, so the game developers develop their own.

Raw source format is not a good format for distributing a game engine to end users. One could ship the code as-is, but that would require people to have the right compilers and software SDKs available on their machines. Distributing the code in the form of a game executable is more practical.

Raw source format is not a good format for distributing the game content either. It is convenient for editing, but the game engine will usually want the content in a different format before using it. The raw source format is often bulky, and the conversion step is lengthy. Therefore, game developers usually create custom tools which "cook" the data -- convert it from source format to something that suits the game engine better.
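As a rough sketch of what "cooking" means here -- with a made-up source format and file layout, not Frostbite's actual ones -- a single cooking step could look something like this:

# Minimal sketch of a "cooking" step, using a made-up JSON mesh format.
# The point is only the shape of the process: a bulky, editor-friendly source
# file is converted offline into packed binary data the engine can load directly.
import json
import struct
from pathlib import Path

def cook_mesh(source_path: Path, cooked_path: Path) -> None:
    mesh = json.loads(source_path.read_text())    # slow-to-parse source format
    vertices = mesh["vertices"]                   # e.g. [[x, y, z], ...]
    indices = mesh["indices"]                     # e.g. [0, 1, 2, ...]

    with cooked_path.open("wb") as out:
        out.write(struct.pack("<II", len(vertices), len(indices)))
        for x, y, z in vertices:
            out.write(struct.pack("<fff", x, y, z))            # fixed binary layout
        out.write(struct.pack(f"<{len(indices)}I", *indices))  # no parsing at runtime

# cook_mesh(Path("soldier.mesh.json"), Path("cooked/soldier.mesh"))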



Cooking is good, and bad.

No cooking gives you the advantage that you can change a source file, restart the game, and the effect happens immediately in the game. It is usually easy to figure out which files on-disk contribute to what in-game.

With no cooking -- or just too little cooking -- you get very long loading times. You usually get more memory fragmentation. You also lack mechanisms to validate the integrity of the data; if you want to see that it's consistent, you have to play through the full game and exercise all aspects of the game.

Cooking gives you the advantage that you can do lots of sanity checks on the data before even starting the game. You can improve loading times a lot (especially off media with slow seek times such as DVD), and you get less memory fragmentation.

You can also create extra datastructures, which improve runtime performance. (A good example of this are the BSP & PVS trees that were used in FPSes back in the 90s.)

With too much cooking, you find that it is difficult to change anything when data already has been cooked. If you want to edit just one tiny little detail, you have to re-cook ALL the data. It is difficult to tell which files on-disk contribute to what in-game.


Now let us consider the Frostbite engine and where it comes from.

It was initially used to create BFBC1. This game was released only for consoles. This means that the team which developed BC1 had to do a certain amount of cooking - mainly to avoid excessive load times and memory fragmentation. The hard memory and performance constraints of running on a console also made it more important to pre-compute some things, and to package data into suitable archives.

With this foundation, we essentially had a game engine which solved a lot of time-consuming problems for us when we began on BFBC2 PC. Loading times would be under control, and it would be easy to figure out which files go into which archives, and which files/archives belong to which level. These are things which are often overlooked when not using a game engine that has been used to ship games on DVD.

We also wanted a way to automatically patch the game. Making an auto-patcher that works properly under Windows XP, Vista & Win7, knows about limited users & UAC, and can handle restarting mid-way through a patch at all, takes a huge amount of time.

Therefore we took the auto-patcher from BF Heroes and modified it slightly. Voila, the game could now pull down updates from the internet and apply them to its own datafiles. We were all set.

Or so we thought.



Some complex systems seem simple on the surface; it is only when you look under the hood that they turn out to be tricky.

The tools which "cook" the game's datafiles take in a set of original files, which are roughly 80GB in size. Most of the files here are organized by function ("this is a mesh, this is a texture, this is an animation, ...") rather than by location (on which level[s] it is used). The tools will process the dataset once per level, extract the subset that applies to the current level, and convert it to a format suitable for the game engine.

This allows the tools to do some per-level precomputations easily; for instance, since all the pixel/vertex shader combinations that will be used throughout the entire level are known, the tools pre-generate all these combinations and store them in a "shader database". (BF2142 generated & compiled the shader combinations during first load - that's one reason why the first launch of a level was very slow there.)
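To make the shader-database idea a bit more concrete, here is a toy illustration; the materials, parameters, and the hash standing in for a shader compiler are all invented, only the "pre-generate every combination offline" shape matches the description above:

import hashlib
import itertools

def compile_shader(material: str, light_count: int, fog: bool) -> bytes:
    # Stand-in for a real shader compiler; just returns an opaque binary blob.
    return hashlib.sha256(f"{material}/{light_count}/{fog}".encode()).digest()

def build_shader_database(materials: list[str]) -> dict[tuple, bytes]:
    # Enumerate every combination the level could need and compile it up front,
    # so nothing has to be compiled while the level is loading.
    database = {}
    for material, light_count, fog in itertools.product(materials, range(4), (False, True)):
        database[(material, light_count, fog)] = compile_shader(material, light_count, fog)
    return database

# One opaque block per level: add a single new material and the whole thing changes,
# which is exactly the patching headache described further down.
level_db = build_shader_database(["concrete", "sand", "foliage"])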

After this is done for all levels, there are a bunch of archives for each level. This is ideal for quick loading times and no memory fragmentation, but it wastes diskspace unnecessarily. The result is probably about 15GB in size.

In order to make a better tradeoff between diskspace and load times, an extra processing step has been added; all the level-archives are compared, and any datafiles which appear inside many of these level-archives are moved to a level-common archive. So when loading a level, everything inside the level's own archives and the level-common archive has to be processed. This reduced the total data size down to about 6GB.
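The extra packaging pass could be sketched roughly like this, assuming for illustration that an archive is just a mapping from file name to content hash:

from collections import Counter

def split_common(level_archives: dict[str, dict[str, str]], threshold: int = 2):
    """level_archives maps level name -> {file name: content hash}."""
    occurrences = Counter(
        (name, digest)
        for files in level_archives.values()
        for name, digest in files.items()
    )
    # Files that appear (with identical content) in several level archives...
    common = {name: digest
              for (name, digest), count in occurrences.items()
              if count >= threshold}
    # ...are stripped from the per-level archives and shipped once instead.
    slimmed = {
        level: {name: digest for name, digest in files.items()
                if common.get(name) != digest}
        for level, files in level_archives.items()
    }
    return common, slimmed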

This technique allowed BFBC2 to fit on a DVD, both for the console and the PC versions. It is not perfect by any stretch, but dealing with these large amounts of data is time consuming, and therefore you don't try every idea under the sun - rather, try what seems most likely to work first, and then keep on until the end result is good enough.



So this is all awesome when shipping the game. Where do the problems begin?

When you begin creating patches.

First off, it turns out that the tools that "cook" the data don't produce binary-identical data all the time. The result after cooking is always functionally identical, but the bit-content may differ slightly. Items in unordered lists change order, uninitialized fields contain random data, that sort of stuff.

Why didn't this get caught sooner? Because you only notice problems of this kind when re-cooking the same data several times, from scratch. And re-cooking all BC2 data takes over 48 hours for a high-end computer. And locating & correcting all places where cooking isn't deterministic down to the bit level would take a lot of time (both calendar time and effective development time). Perhaps that time is better spent elsewhere?

So. If different "cooking" runs produce slightly different results, it is suddenly difficult to look at version A and version B of the data and answer the question, "what are the differences between these two datasets?". It's easy when looking at the source data, but when looking at the cooked data there are a lot of changes which have no effect on the final game experience.

There are about 40,000 source files, and this results in well over 100,000 cooked files. Going through those by hand is not an option. Writing a perfect filter, which knows which differences are benign and which are real, would take as much time and effort as making the cooking 100% deterministic. That is not an option either.

So you make a filter which does something in between; be smart, create rules for the file types you know about, and when in doubt - assume that the change is for real. Err on the side of caution.
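A toy version of such a filter might look like the sketch below. The per-extension rules are invented placeholders; only the overall shape -- known-benign rules plus a conservative fallback -- matches what is described above:

from pathlib import Path

def equal_ignoring_noise(old: bytes, new: bytes, suffix: str) -> bool:
    if suffix == ".meshset":
        # Hypothetical rule: this format carries a timestamp in its 16-byte header.
        return old[16:] == new[16:]
    if suffix == ".itemlist":
        # Hypothetical rule: line order is not significant in this format.
        return sorted(old.splitlines()) == sorted(new.splitlines())
    # Unknown file type: treat any difference as a real change.
    return old == new

def needs_patching(old_path: Path, new_path: Path) -> bool:
    return not equal_ignoring_noise(
        old_path.read_bytes(), new_path.read_bytes(), old_path.suffix
    )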


Then you realize that those shader databases were never designed to be extendable. What happens when a new object is added to a level in a patch? Its mesh & textures are included, no sweat, but what about the shader combinations? How does one add something to the shader database, when the shader database is an opaque binary block whose entire contents may change when just one object is added to the level?
(One shader database is about 5MB. There are three shader databases per level - one each for DX9, DX10 and DX11.)


And finally, the patch system itself. Yes, it can replace portions of files on-disk. But due to its heritage (from BF Heroes), it is not able to open BFBC2's archive files and apply differences to individual files within the archives.
The only straightforward way is to make all patched-in content arrive in archives on the side of the original archives.
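Conceptually the workaround amounts to something like the following sketch, assuming the engine resolves files by searching archives in priority order, so a side archive can shadow the originals without the patcher ever rewriting them:

def resolve(file_name: str, archives: list[dict[str, bytes]]) -> bytes:
    """archives is ordered highest priority first, e.g. [patch_r8, level_common, level_01]."""
    for archive in archives:
        if file_name in archive:
            return archive[file_name]
    raise FileNotFoundError(file_name)

# Side effect: a patched file exists both in its original archive and in the side
# archive, so the patch ships whole files rather than in-place diffs.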



Given the above scenario, we end up with the situation that we have today.

Each patch gets larger than the previous, because the game drifts ever further away from what was shipped on the DVD. Changes that require shader database updates make the patch balloon in size. And we have to be careful and clever when selecting which file types to include and which to ignore when creating the patch.

And that's where we finally ran into real problems. It was too difficult for one person to identify which changes were required and which were not, and how to update the patch generation process to accommodate the latest set of changes. Most of the delay of Client R8 was because there are very few people at DICE who have the in-depth knowledge of the far-flung corners of the game engine *and* the cooking tools *and* the patch generation process to work out what is going wrong, why, and how to fix it.

The new content did work a long while ago - but back then the patch was approximately 7GB in size. The patch had to get down to less than 1GB, or else some customers in Australia and South Africa would not be able to download it due to bandwidth caps.

[As an aside - how about distributing the patch as an installer, for those who prefer that option? We would love to do so, but creating an installer that does anything beyond the absolutely most ordinary installer tasks requires ridiculous amounts of development time.]


I think that we have a proper Client R8 patch ready for release, but the approach we have been using thus far has been taken as far as it can go. (A bit too far, considering the delays.) We want to continue supporting the game, but if we want to change anything other than the game executable itself, we will need to spend time on figuring out how to do so with a less error-prone patch generation procedure ... and preferably without generating such large patches.

This stuff keeps me awake at night. What are your demons made of?
Source: http://forums.electronicarts.co.uk/batt ... aging.html