Greetings! Today I would like to talk about a topic that any game designer runs into in one way or another: editing game statics.
Imagine you have 100,500 kinds of different swords in the game, and all of them suddenly need their base damage raised a little. Usually the good old Excel is harnessed for this, and the results are then inserted into JSON/XML by hand or with regexes, but this is slow, tedious and fraught with validation errors.
Let's see how well Google Spreadsheets and its built-in Google Apps Script can serve such purposes, and whether you can save time with them.
Let me note in advance that we are talking about statics for f2p games or games-as-a-service, which are characterized by regular updates of mechanics and a steady stream of new content, i.e. the process described above is more or less constant.
So, to edit the same swords, you need to perform three operations:
- extract the current damage indicators (if you do not have ready-made calculation tables);
- calculate updated values in good old Excel;
- transfer new values to game JSONs.
As long as you have a ready-made tool and it suits you, everything is fine and you can keep editing the way you are used to. But what if there is no tool? Or worse, there is no game at all yet because it is still in development? In that case, besides editing the existing data, you also need to decide where to store it and what structure it will have.
Storage, at least, is more or less clear and standardized: in most cases statics is just a set of separate JSON files lying somewhere in the VCS. There are, of course, more exotic cases where everything is stored in a relational (or not so relational) database or, worst of all, in XML. But if you chose one of those instead of plain JSON, you most likely had good reasons for it, because the performance and usability of those options are highly questionable.
But when it comes to the structure of the statics and editing it, changes are often radical and daily. Of course, in some situations nothing beats the efficiency of plain Notepad++ coupled with regexes, but we still want a tool with a lower entry threshold that the whole team can comfortably edit in.
For me personally, the banal and well-known Google Spreadsheets became that tool. Like any tool, it has its pros and cons, which I will try to weigh from a game designer's point of view.
| Pros | Cons |
|---|---|
| Low entry threshold, familiar to the whole team | Flat tables do not hold nested JSON out of the box |
| Convenient collaborative editing | No built-in export to JSON |
| Extensible with Google Apps Script | Performance limits of heavy documents and the Drive API |
For me the pros significantly outweighed the cons, so it was decided to try to find a workaround for each of the cons listed.
What happened in the end?
A separate document was made in Google Spreadsheets, with a Main sheet from which we control the export, and the remaining sheets, one per game object.
At the same time, to fit the usual nested JSON into a flat table, we had to reinvent the wheel a little. Let's say we had the following JSON:
```json
{
  "test_craft_01": {
    "id": "test_craft_01",
    "tags": [ "base" ],
    "price": [ {"ident": "wood", "count": 100}, {"ident": "iron", "count": 30} ],
    "result": {
      "type": "item",
      "id": "sword",
      "rarity_wgt": { "common": 100, "uncommon": 300 }
    }
  },
  "test_craft_02": {
    "id": "test_craft_02",
    "price": [ {"ident": "sword", "rarity": "uncommon", "count": 1} ],
    "result": {
      "type": "item",
      "id": "shield",
      "rarity_wgt": { "common": 100 }
    }
  }
}
```
In a table, this structure can be represented as pairs of "full path" and "value". From this a home-grown path markup language was born, in which:
- `text` - a field or an object
- `/` - a hierarchy separator
- `text[]` - an array
- `#number` - the index of an element in an array
Thus, the JSON will be written to the table as follows (the layout below is reconstructed from the example and the markup rules above):
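| keypath | test_craft_01 | test_craft_02 |
|---|---|---|
| id | test_craft_01 | test_craft_02 |
| tags[] | base | |
| price[]/#1/ident | wood | sword |
| price[]/#1/rarity | | uncommon |
| price[]/#1/count | 100 | 1 |
| price[]/#2/ident | iron | |
| price[]/#2/count | 30 | |
| result/type | item | item |
| result/id | sword | shield |
| result/rarity_wgt/common | 100 | 100 |
| result/rarity_wgt/uncommon | 300 | |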
Accordingly, adding a new object of this type means adding another column to the table, and if the object has any special fields, extending the list of keypath rows with the new keys.
The division into the root and other levels is an extra convenience for using filters in the table. Beyond that, a simple rule applies: if a value in the object's column is not empty, it is added to the JSON and exported.
In case new fields are added to the JSON and someone makes a mistake in the path, it is caught by the following regex at the conditional formatting level:
```
=if( LEN( REGEXREPLACE(your_cell_name, "^[a-zA-Z0-9_]+(\[\])*(\/[a-zA-Z0-9_]+(\[\])*|\/\#*[0-9]+(\[\])*)*", ""))>0, true, false)
```
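To illustrate (the sample paths here are mine, not from the original sheet): the formula cuts out the longest valid keypath prefix, and if anything is left over, the cell is flagged as an error:

```
result/rarity_wgt/common   -> fully matched, nothing remains, the path is valid
price[]/#1/ident           -> fully matched, valid
price/#one                 -> matching stops at "price", "/#one" remains, the cell is highlighted
```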
And now about the export process itself. Go to the Main sheet, select the desired objects for export in the #ACTION column and ...
click on Palpatine (͡ ° ͜ʖ ͡ °)
As a result, a script runs, takes the data from the sheets listed in the #OBJECT field, and exports it to JSON. The export path is set in the #PATH field, and the destination is the personal Google Drive tied to the Google account under which you are viewing the document.
The #METHOD field lets you configure how exactly the JSON is exported:
- single - one file is exported, named after the object (without the emoji, of course, they are here only for readability)
- separate - each object from the sheet is exported into its own JSON file.
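That is, if a sheet holds the two crafts from the example, single should yield one file with both objects keyed by their names, much like the source JSON at the beginning of the article, while separate should yield test_craft_01.json and test_craft_02.json, each containing only its own body (the exact merged shape is my reading of the code below).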
The remaining fields are mostly informational: they show how many objects are currently ready for export and who exported them last.
While trying to implement an honest call of the export method, I ran into an interesting quirk of spreadsheets: you can attach a function call to a picture, but you cannot pass arguments in that call. After a short period of frustration, it was decided to keep riding the home-made bicycle, and so the idea of marking up the data sheets themselves was born.
So, for example, the anchors ###data### and ###end_data### appeared on the data sheets; they delimit the data area for export.
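Judging by the code below, a data sheet is laid out roughly like this (a sketch; the exact placement of the service rows is my assumption):

```
###enable###   | TRUE          | FALSE
###data###     |               |
keypath        | test_craft_01 | test_craft_02
id             | test_craft_01 | test_craft_02
result/type    | item          | item
...            | ...           | ...
###end_data### |               |
```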
Source code
Accordingly, here is what assembling the JSON looks like at the code level:
- Take the #OBJECT field and fetch all the data of the sheet with that name:

```javascript
var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName(name);
var data = sheet.getDataRange().getValues();
```
- Find the coordinates of the anchors on the sheet (a straightforward full scan, comparing each cell with the anchor text):

```javascript
// Returns the (column, row) coordinates of the cell that equals the anchor text
function GetAnchorCoordsByName(anchor, data){
  var coords = { x: 0, y: 0 };
  for(var row = 0; row < data.length; row++){
    for(var column = 0; column < data[row].length; column++){
      if(data[row][column] == anchor){
        coords.x = column;
        coords.y = row;
      }
    }
  }
  return coords;
}
```
- Filter out inactive objects (the columns whose value in the ###enable### row is not true):

```javascript
// Removes every column whose cell in the ###enable### row is falsy
function FilterActiveData(data, enabled){
  for(var column = enabled.x + 1; column < data[enabled.y].length; column++){
    if(!data[enabled.y][column]){
      for(var row = 0; row < data.length; row++){
        data[row].splice(column, 1);
      }
      column--;
    }
  }
  return data;
}
```
- Cut the data down to the area between the ###data### and ###end_data### anchors:

```javascript
// Keeps only the rows and columns between the start and end anchors
function FilterDataByAnchors(data, start, end){
  data.splice(end.y);
  data.splice(0, start.y + 1);
  for(var row = 0; row < data.length; row++){
    data[row].splice(0, start.x);
  }
  return data;
}
```
- Collect the list of keypaths (the first column of the data area):

```javascript
// The first column of the filtered data holds the keypaths
function GetJsonKeys(data){
  var keys = [];
  for(var i = 1; i < data.length; i++){
    keys.push(data[i][0]);
  }
  return keys;
}
```
- Assemble the JSON objects themselves:

```javascript
// Prepares the sheet data for export.
// With the single method all objects are later merged into one file,
// with separate each object becomes its own JSON file.
function PrepareJsonData(filteredData){
  var keys = GetJsonKeys(filteredData);
  var jsonData = [];
  for(var i = 1; i < filteredData[0].length; i++){
    var objValues = GetObjectValues(filteredData, i);
    var jsonObject = {
      "objName": filteredData[0][i],
      "jsonBody": ParseToJson(keys, objValues)
    };
    jsonData.push(jsonObject);
  }
  return jsonData;
}

// Assembles a JSON object from parallel lists of keypaths and values
function ParseToJson(fields, values){
  var outputJson = {};
  for(var field in fields){
    if( IsEmpty(fields[field]) || IsEmpty(values[field]) ){
      continue;
    }
    AddJsonValueByPath(outputJson, fields[field], values[field]);
  }
  return outputJson;
}

// Adds a single value to the JSON by its full keypath
function AddJsonValueByPath(jsonObject, path, value){
  if(IsEmpty(value)) return jsonObject;
  var nodes = PathToArray(path);
  AddJsonValueRecursive(jsonObject, nodes, value);
  return jsonObject;
}

// Splits a keypath string into an array of nodes
function PathToArray(path){
  if(IsEmpty(path)) return [];
  return path.split("/");
}

// Walks the path, creating missing intermediate nodes,
// and writes the value at the last node
function AddJsonValueRecursive(jsonObject, nodes, value){
  var node = nodes[0];
  if(nodes.length > 1){
    AddJsonNode(jsonObject, node);
    var cleanNode = GetCleanNodeName(node);
    nodes.shift();
    AddJsonValueRecursive(jsonObject[cleanNode], nodes, value);
  } else {
    AddJsonValue(jsonObject, node, value);
  }
  return jsonObject;
}

// Adds an intermediate node to the JSON; its kind depends on the markup
function AddJsonNode(jsonObject, node){
  if(jsonObject[node] != undefined) return jsonObject;
  var type = GetNodeType(node);
  var cleanNode = GetCleanNodeName(node);
  switch (type){
    case "array":
      if(jsonObject[cleanNode] == undefined) { jsonObject[cleanNode] = []; }
      break;
    case "nameless":
      AddToArrayByIndex(jsonObject, cleanNode);
      break;
    default:
      jsonObject[cleanNode] = {};
  }
  return jsonObject;
}

// Pads the array with empty objects up to the given index
function AddToArrayByIndex(array, index){
  if(array[index] != undefined) return array;
  for(var i = array.length; i <= index; i++){
    array.push({});
  }
  return array;
}

// Writes a leaf value (pushing it if the node is marked as an array)
function AddJsonValue(jsonObject, node, value){
  var type = GetNodeType(node);
  var cleanNode = GetCleanNodeName(node);
  switch (type){
    case "array":
      if(jsonObject[cleanNode] == undefined){
        jsonObject[cleanNode] = [];
      }
      jsonObject[cleanNode].push(value);
      break;
    default:
      jsonObject[cleanNode] = value;
  }
  return jsonObject;
}

// Determines the node type by its markup:
// object - a plain field or object,
// array - a text[] node, its values are pushed into an array,
// nameless - a #number node, an element of an array of objects
function GetNodeType(key){
  var reArray = /\[\]/;
  var reNameless = /#/;
  if(key.match(reArray) != null) return "array";
  if(key.match(reNameless) != null) return "nameless";
  return "object";
}

// Strips the markup from a node name before writing it to the JSON
function GetCleanNodeName(node){
  var reArray = /\[\]/;
  var reNameless = /#/;
  node = node.replace(reArray, "");
  if(node.match(reNameless) != null){
    node = node.replace(reNameless, "");
    node = GetNodeValueIndex(node);
  }
  return node;
}

// Converts a nameless node into a zero-based array index
function GetNodeValueIndex(node){
  var re = /[^0-9]/;
  if(node.match(re) != null){
    throw new Error("Nameless value key must be: '#[0-9]+'");
  }
  return parseInt(node) - 1;
}
```
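The listing references two helpers, IsEmpty and GetObjectValues, that are not shown. Minimal versions consistent with how they are called might look like this (my sketch, not the author's code):

```javascript
// Hypothetical helper: treats undefined, null and the empty string as empty
function IsEmpty(value){
  return value === undefined || value === null || value === "";
}

// Hypothetical helper: returns the value column of the object with the given
// column index, aligned with the keypath list produced by GetJsonKeys
function GetObjectValues(data, index){
  var values = [];
  for(var row = 1; row < data.length; row++){
    values.push(data[row][index]);
  }
  return values;
}
```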
- Save the resulting JSON files to Google Drive:

```javascript
// Creates a file on Google Drive; arguments: folder path, file name
// and the file contents as a string
function CreateFile(path, filename, data){
  var folder = GetFolderByPath(path);
  var isDuplicateClear = DeleteDuplicates(folder, filename);
  folder.createFile(filename, data, "application/json");
  return true;
}

// Finds the target folder on Google Drive by its path
function GetFolderByPath(path){
  var parsedPath = ParsePath(path);
  var rootFolder = DriveApp.getRootFolder();
  return RecursiveSearchAndAddFolder(parsedPath, rootFolder);
}

// Splits the path into an array of folder names
function ParsePath(path){
  if ( CheckPath(path) ){
    return path.match(/\w+/g);
  }
  return undefined;
}

// Validates the path format
function CheckPath(path){
  var re = /\/\/(\w+\/)+/;
  if(path.match(re) == null){
    throw new Error("File path " + path + " is invalid, it must be: '//.../'");
  }
  return true;
}

// Trashes files with the same name, if any.
// Otherwise Drive would happily keep several files with the same name
function DeleteDuplicates(folder, filename){
  var duplicates = folder.getFilesByName(filename);
  while ( duplicates.hasNext() ){
    duplicates.next().setTrashed(true);
  }
}

// Walks down the path, creating the missing folders along the way
function RecursiveSearchAndAddFolder(parsedPath, parentFolder){
  if(parsedPath.length == 0) return parentFolder;
  var pathSegment = parsedPath.splice(0, 1).toString();
  var folder = SearchOrCreateChildByName(parentFolder, pathSegment);
  return RecursiveSearchAndAddFolder(parsedPath, folder);
}

// Returns the child folder of parent with the given name, creating it if missing
function SearchOrCreateChildByName(parent, name){
  var childFolder = SearchFolderChildByName(parent, name);
  if(childFolder == undefined){
    childFolder = parent.createFolder(name);
  }
  return childFolder;
}

// Searches the children of parent for a folder with the given name
function SearchFolderChildByName(parent, name){
  var folderIterator = parent.getFolders();
  while (folderIterator.hasNext()){
    var child = folderIterator.next();
    if(child.getName() == name){
      return child;
    }
  }
  return undefined;
}
```
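For reference, here is how the pieces above could chain together for one sheet exported with the single method. This glue function is my reconstruction (the name ExportSheet and the merging step are assumptions), not the project's actual entry point:

```javascript
// Sketch: export one sheet into a single JSON file on Google Drive
function ExportSheet(name, path){
  var data = SpreadsheetApp.getActiveSpreadsheet()
      .getSheetByName(name).getDataRange().getValues();
  // drop the columns of objects disabled in the ###enable### row
  var enabled = GetAnchorCoordsByName("###enable###", data);
  data = FilterActiveData(data, enabled);
  // keep only the area between ###data### and ###end_data###
  var start = GetAnchorCoordsByName("###data###", data);
  var end = GetAnchorCoordsByName("###end_data###", data);
  var filtered = FilterDataByAnchors(data, start, end);
  // assemble the objects and merge them into one JSON named after the sheet
  var jsonData = PrepareJsonData(filtered);
  var merged = {};
  for(var i = 0; i < jsonData.length; i++){
    merged[jsonData[i].objName] = jsonData[i].jsonBody;
  }
  CreateFile(path, name + ".json", JSON.stringify(merged, null, 2));
}
```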
Done! Now we go to Google Drive and pick up our files there.
Why bother with files on Google Drive at all instead of posting directly to Git? Mainly so that the files can be checked before they fly off to the server.
What could not be solved nicely: when running various A/B tests, you always need separate branches of the statics in which part of the data differs. Since such a branch is in fact just another copy of the data, we can copy the spreadsheet itself for the A/B test, change the data in it, and export the test data from there.
Conclusion
How fast does such a solution turn out to be in the end? Surprisingly fast. Given that most of this work was already being done in spreadsheets, using the right tool turned out to be the best way to cut development time.
Since the document barely uses formulas that trigger cascading recalculation, there is practically nothing to slow it down. Transferring balance calculations from other tables now takes minimal time: you just go to the desired sheet, set the filters and copy the values.
The main performance bottleneck is the Google Drive API: searching for and deleting/creating files takes most of the time; the only remedies are to not export all the files at once, or to export a sheet as one combined JSON instead of separate files.
I hope this tangle of perversions proves useful to those who still edit their JSONs by hand and with regexes, and who do balance calculations of statics in Excel instead of Google Spreadsheets.
Links
Example of a spreadsheet-exporter
Link to a project in Google Apps Script