Answered

Design issues with user events

I'm building save and load features for storing the logical structure of user-created animations. The animations are stored as JSON, and I've created a SaveTrack event for saving them.


The animation JSON can be relatively large (up to 200 kB or more), and the user may want to save frequently, so I'd also like an overwrite function to avoid accumulating bloated, unneeded data.


I've had to hack more than I'm used to to get my current implementation working - here are some of the design hacks:

  • The auto-created SaveTrack-events collection appears to be useless - it cannot be accessed from cloud code, and I wonder what the design idea behind its existence is. It also works against the overwrite-save feature, as the input data for every event is stored as long as no error is set on the response.
  • I've used Spark.setScriptError to disable event logging to the SaveTrack-events collection, because it would otherwise store all the animation JSON from every save event, even though the data couldn't be accessed anyway. Instead I use a separate runtime collection for saving new copies and overwriting previous ones as necessary. This, however, makes it more difficult to detect genuine errors in the response (e.g. a missing input parameter), as the error is set even for successful responses.
  • When I create a new document in the SavedTracks runtime collection, I have to use a temporary SearchId field to retrieve the actual $oid value of the newly created document. This id is then sent to the client in the response (as an error!) so the client can use it in upcoming SaveTrack events to indicate whether to overwrite or save a new copy. This messy process of retrieving the id of a new document could be streamlined by providing a method for generating $oid values from JavaScript.


Is there a better or less hacky way to implement this functionality?


SaveTrack cloud code:

 

var trackData = { "Metadata" : Spark.data["Metadata"], "Data" : Spark.data["Data"] };

var tracksCollection = Spark.runtimeCollection("SavedTracks");

var id = Spark.data["Id"];
if (id === "")
{
    // No id specified - create a new document and return its id in response
    // Use a temporary search id to allow the new document to be retrieved after creation
    var searchId = makeId();
    trackData["SearchId"] = searchId;
    tracksCollection.save(trackData);
    var trackDataWithId = tracksCollection.findOne({"SearchId" : searchId});
    // Overwrite without the temporary search id
    delete trackDataWithId["SearchId"];
    tracksCollection.save(trackDataWithId);
    
    id = trackDataWithId["_id"]["$oid"];
}
else
{
    // Overwrite using id from request
    trackData["_id"] = { "$oid" : id };
    tracksCollection.save(trackData);
}

// Return id in response error
// This will also ensure the event is not stored in SaveTrack-events
Spark.setScriptError("Id" , id);


function makeId()
{
    var text = "";
    var possible = "0123456789abcdef";

    for( var i=0; i < 5; i++ )
        text += possible.charAt(Math.floor(Math.random() * possible.length));

    return text;
}

 






Hi Jussi


The auto-created collection is used internally for analytics. We could hide these collections, but we take the view that if we hold data of yours, it should be accessible (and visible).


I don't think it's a good idea to use setScriptError to stop the auto-created event from being saved, for exactly the reason you mention: you can no longer use the other validation the platform provides.


I see the issue with trying to get newly created ids. You could generate your own id - a combination of playerId and the current date/time is usually enough. If you set a value on the "_id" attribute of the document before saving, it will be used rather than one being auto-created.


Something like this:


trackData["_id"] = Spark.getPlayer().getPlayerId() + "-" + new Date().getTime();


Hope that helps!


Gabriel


I've cleaned up the id-creation logic and disabled the setScriptError call. I can see the data size becoming a scalability issue with the SaveTrack event if the service ever reached hundreds or thousands of users, but that's not likely to happen in the foreseeable future, so I'll look into it if it ever becomes relevant. Would it be feasible to build the feature using the uploadFile feature instead? I haven't experimented with that approach yet, so I have no idea of the potential issues.


 

var trackData = { "Metadata" : Spark.data["Metadata"], "Data" : Spark.data["Data"] };

var tracksCollection = Spark.runtimeCollection("SavedTracks");

var id = Spark.data["Id"];
if (id === "")
{
    // No id specified - create a new document with a new id
    id = createId();
}
else
{
    // Overwrite existing document using the id from request
    trackData["_id"] = { "$oid" : id };
}

tracksCollection.save(trackData);

// Return id in response
Spark.setScriptData("Id" , id);

function createId() {
    // Timestamp as 8 hex digits
    var timestamp = (Math.floor(new Date().valueOf() / 1000)).toString(16);
    // Two 8-digit random hex components (16 random hex digits in total)
    var random1 = (Math.floor(Math.random() * 4294967296)).toString(16);
    var random2 = (Math.floor(Math.random() * 4294967296)).toString(16);
    return '00000000'.substr(0, 8 - timestamp.length) + timestamp +
           '00000000'.substr(0, 8 - random1.length) + random1 +
           '00000000'.substr(0, 8 - random2.length) + random2;
}
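The helper above can be exercised outside the sandbox. A standalone copy (plain JavaScript, no GameSparks APIs assumed) makes the padding logic a bit more compact and confirms the result is always 24 hex characters, the same width as a Mongo ObjectId:

```javascript
// Standalone version of createId(): pads each 8-digit hex component so
// the concatenated result is always 24 lowercase hex characters.
function createId() {
    function pad8(hex) { return ('00000000' + hex).slice(-8); }
    var timestamp = Math.floor(new Date().valueOf() / 1000).toString(16);
    var random1 = Math.floor(Math.random() * 4294967296).toString(16);
    var random2 = Math.floor(Math.random() * 4294967296).toString(16);
    return pad8(timestamp) + pad8(random1) + pad8(random2);
}

var id = createId();
// id matches /^[0-9a-f]{24}$/
```

Note that unlike a real ObjectId, the last 16 digits are purely random with no machine/process component, so uniqueness rests on Math.random() alone.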

 

Hi Jussi


We're making a change to add the "_id" field to a saved document if it's not already supplied. This will allow you to get the id of a newly saved document without the hassle of trying to create one yourself.


Thanks for pointing out the missing feature! Keep it up!


Gabriel

I don't quite understand what you mean. Are you making an update so that after tracksCollection.save(trackData) the _id field would be added to trackData? The _id field is already added to the persisted document, but the problem with the earlier code was that the document had to be re-read from the collection in order to get the id.

Yes, the _id field would be added to the trackData object by our framework.


In the following example:

 

var trackData = { "Metadata" : Spark.data["Metadata"], "Data" : Spark.data["Data"] };
 
var tracksCollection = Spark.runtimeCollection("SavedTracks");

tracksCollection.save(trackData);

var newId = trackData["_id"];

 

The variable newId would then hold the id assigned by Mongo.
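The promised behaviour can be mocked in plain JavaScript to show what a caller of save() would observe. Both mockSave and the fixed $oid value below are hypothetical stand-ins for the framework, not its real implementation:

```javascript
// Hypothetical mock of the promised save() behaviour: the passed-in
// document is mutated in place, gaining an "_id" field when none was supplied.
function mockSave(document) {
    if (!document.hasOwnProperty("_id")) {
        // In the real framework this value would be assigned by Mongo.
        document["_id"] = { "$oid": "535ebae53004a862fef69155" };
    }
}

var trackData = { "Metadata": {}, "Data": {} };
mockSave(trackData);
var newId = trackData["_id"]; // available without re-reading the collection
```

A document saved with an "_id" already set would keep it, which is what makes the overwrite path work unchanged.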

I see that you've now added this feature to the release. However, there's an issue with accessing the $oid field after a save operation - I get an "Access to Java class "org.bson.types.ObjectId" is prohibited" error when trying to retrieve the $oid value. The code below illustrates the issue.


 

var collection = Spark.runtimeCollection("IdTest");
var document = {};
collection.save(document);

Spark.setScriptData("result", document);
/* OK
  "result": {
   "_id": {
    "$oid": "535eb4cee4b0f8cf77241025"
   }
  }
*/

Spark.setScriptData("result", document["_id"]);
/* OK
  "result": {
   "$oid": "535eb508e4b0f8cf7724112b"
  }
*/

Spark.setScriptData("result", document["_id"]["$oid"]);
/* Cannot retrieve id value
 "error": {
  "message": "Access to Java class \"org.bson.types.ObjectId\" is prohibited. (140283-event-IdTest#27)"
 },
*/

 

Answer

Hey Jussi


Yes, we spotted this this week, it will be addressed in the release tomorrow morning.


Our code :

 

var collection = Spark.runtimeCollection("IdTest");
var document = {};
collection.save(document);

Spark.setScriptData("result1", document);
Spark.setScriptData("result2", document["_id"]);
Spark.setScriptData("result3", document["_id"]["$oid"]);

 

returns

 

{
 "@class": ".LogEventResponse",
 "requestId": "1398717133604",
 "scriptData": {
  "result3": "535ebae53004a862fef69155",
  "result2": {
   "$oid": "535ebae53004a862fef69155"
  },
  "result1": {
   "_id": {
    "$oid": "535ebae53004a862fef69155"
   }
  }
 }
}

 


Gabriel
