
GameData service missing features

Hi GS Support,

 

Since you introduced the GameData service and made runtime collections obsolete, we are missing a lot of features, which slows down development and may force us to move to a different backend.

 

Geo features - These were already mentioned here: https://support.gamesparks.net/support/discussions/topics/1000087790 - geospatial queries with a radius, which we use to find places/players around the world.

getItem() method in Cloud Code - Currently it only supports lookup by id. What if I want to search on some custom field, as findOne() allowed? This can be worked around with queryItems() (a sketch follows this list), but in my opinion that is inefficient.

Random id when creating a new item - You have to specify an id. It would be nice to have another method without the id parameter that generates a random one, as Mongo did before. This can also be worked around by generating an id yourself, praying the random generator is random enough, and adding a few checks - another inefficient one.

Update queries - Why should I fetch a whole document just to change a single value and then push the whole document back? With runtime collections I could write an update query that changed the value without fetching the whole document; it would be great to have that here.

Atomicity - When two players changed a single document, Mongo locked it, unlocked it, and then executed the next query. The persistor with withVersionCheck(), however, only returns false. There should be more documentation on how it behaves: whether it handles the conflict itself (by waiting), or whether the call should be retried in a loop - fetch the data again, make the changes, and so on until it returns true.

Fetch only the needed fields - In a Mongo query you can specify which fields to return; this was a great feature.
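
To illustrate the getItem() point, the queryItems() workaround looks roughly like this (a sketch only; "playerProfile" and the indexed "email" field are placeholder names, and the condition-builder syntax is how I read it from the docs):

// Sketch: emulate findOne({email: ...}) with the Game Data Service.
// Assumes items of type "playerProfile" with an indexed string field "email".
var api = Spark.getGameDataService();

// Direct lookup by id - the supported, efficient path.
var byId = api.getItem("playerProfile", "player_123");
var itemById = byId.document(); // null if no such item exists

// Lookup by a custom field - only possible via queryItems on an indexed field.
var query = api.S("email").eq("daniel@example.com");
var result = api.queryItems("playerProfile", query, api.sort("email", true));

var first = null;
if (!result.error()) {
    var cursor = result.cursor();
    if (cursor.hasNext()) {
        first = cursor.next(); // take the first match, like findOne()
    }
}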


Yes, I know some of these are minor, but I definitely like the efficiency I could achieve and the extra control I had.

 

Hopefully the next release will extend this new API; the idea is not bad, but the API feels somewhat unfinished.

 

Thanks,

Daniel


14 people like this idea

Other basic features that are missing:
Drop Collections from Cloud Code
Query data by ID without having to create an index

 


2 people like this

Hi Guys,


Thanks for raising these questions. Let me look into them and get some answers for you. @Kevin, in relation to your questions: dropping entire collections in Cloud Code is generally not a great idea, as full collection drops are quite intensive on system performance. As for the query issue, you can find any document by id without adding an index, but to query on other fields you will need an index in place.


Regards,

Liam

@Liam We have a feature which we want to reset every week. The easiest way to reset the feature is to drop the associated collection. So that our staff doesn't need to manually go in and drop the collection each week, we want to schedule the reset using cloud code. If you don't have a way to drop the collection, the best solution I can think of is to drop the entries one at a time, which I suspect would be much more intensive on system performance than dropping the collection all at once.
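
For reference, the one-at-a-time approach would look something like this (a rough sketch; "weeklyEvent", the indexed "week" field, and the week key are placeholders, and it assumes SparkDataItem exposes a delete() method):

// Sketch: "reset" a weekly feature by deleting its items one by one.
// Assumes an indexed string field "week" on items of type "weeklyEvent",
// and that each SparkDataItem can be removed with delete().
var api = Spark.getGameDataService();
var query = api.S("week").eq("2019-W32"); // hypothetical week key
var result = api.queryItems("weeklyEvent", query, api.sort("week", true));

if (!result.error()) {
    var cursor = result.cursor();
    while (cursor.hasNext()) {
        cursor.next().delete(); // one delete per item - the part that worries me
    }
}
// The cursor is capped, so a real scheduled job would have to repeat this until
// the query comes back empty - exactly the overhead I'd like to avoid.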

 

Hi Kevin,


Dropping an entire collection would not be the best way to handle this, and there is no Cloud Code method for it in the new Data API. Depending on the time at which you want to remove the data, you could get the current time in Cloud Code and then set a TTL on the data as it is inserted, so that it is automatically removed at the appropriate time for you and your team.
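
As a rough sketch (the type, id, and reset schedule are placeholders, and it assumes setTTL() takes the remaining lifetime in seconds - please check the SparkDataItem reference for the exact unit):

// Sketch: insert an item that expires at the next weekly reset.
var api = Spark.getGameDataService();

var now = new Date();
var msPerWeek = 7 * 24 * 60 * 60 * 1000;
// Hypothetical reset anchor: weeks measured from the Unix epoch.
var msUntilReset = msPerWeek - (now.getTime() % msPerWeek);

var item = api.createItem("weeklyEvent", "event_" + Spark.getPlayer().getPlayerId());
var data = item.getData();
data.score = 0;
data.createdAt = now.getTime();
item.setTTL(Math.floor(msUntilReset / 1000)); // assumed to be seconds
item.persistor().persist();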


Regards,

Liam

Any news on this?

Hi Liam,


How can I add a TTL to an entry in GDS?


Thanks

Hi Daniel,


The Game Data Service is still evolving and user input is always welcome. Regarding the specific issues you raised:


Geo features - These aren't supported at this time.

getItem() - There's no direct 'findOne' equivalent in the new system; the optimal approach is to getItem using the itemId, or failing that, queryItems on an indexed field.

Random id - As mentioned above, getItem with an id should always be your preferred way of retrieving items, so you should avoid using randomly generated ids where possible.

Update queries - You must retrieve or create an item prior to persisting it. This is unlikely to change.

Atomicity - A version conflict causes the persist operation to fail and return false, so depending on your specific requirement you could use a while loop to repeat the process until the persist has succeeded (there is a sketch after this list).

Fetch only needed fields - This isn't supported at this time.
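
Here is the retry pattern mentioned under 'Atomicity', as a rough sketch (the type, id, and field are placeholders; per the behaviour described above, persist() is assumed to return false when the version check fails):

// Sketch: optimistic-concurrency retry with withVersionCheck().
var api = Spark.getGameDataService();
var persisted = false;
var attempts = 0;

while (!persisted && attempts < 5) { // cap the retries defensively
    var result = api.getItem("playerProfile", "player_123");
    var item = result.document();
    if (item === null) {
        break; // nothing to update
    }
    var data = item.getData();
    data.coins = (data.coins || 0) + 10; // the single-value change
    persisted = item.persistor().withVersionCheck().persist();
    attempts++;
}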


Thank you for the feedback, we will pass all of this on to the product team.


Regards,

Vinnie


1 person likes this

Hi Ali,


You can add a TTL via the portal using the TTL field at the bottom of the 'Insert' tab. In Cloud Code you can use the 'setTTL' function. If you have any difficulty implementing this, please let us know.
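
For example (a sketch; the type, id, and TTL value are placeholders, and setTTL() is assumed to take seconds):

// Sketch: add a TTL to an existing Game Data Service entry.
var api = Spark.getGameDataService();
var result = api.getItem("testMarket", "offer_001"); // hypothetical type and id
var item = result.document();
if (item !== null) {
    item.setTTL(24 * 60 * 60); // assumed to be in seconds (one day here)
    item.persistor().persist();
}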


Regards,

Vinnie

Vinnie, in several other posts I see GameSparks support telling us specifically to create random strings to use as IDs; playerId + timestamp is what's shown in the examples and suggested by other support team members.


https://docs.gamesparks.com/tutorials/multiplayer/sharing-data-between-players-using-game-data-service.html


This is nothing but a 'random Id'...



@Liam Assuming the new GameData service still runs on an underlying MongoDB framework, dropping an entire collection at once should be much less intensive on system resources than setting a TTL on every single item in the collection and relying on the daemon to remove them one at a time...


1 person likes this
Vinnie/Support,

Is it possible to get the details covered in your response included in the documentation? Specifically, the use of TTL and version checking, along with sample code showing how each is used.

Some of the other design decisions in the GameDataService, such as the inability to get all records, drop collections, and the limiting of indexes to 5 for both searching and sorting, are a bit shortsighted. I totally support and agree with an API that steers developers in the "right direction" to develop scalable, performant data structures. However, I can't grasp why certain fundamental operations are restricted as "intensive" without giving developers credit for understanding the appropriate and inappropriate uses of those operations. In what scenario would someone drop a collection so frequently that it would impact server performance?

I find myself in situations where, instead of wasting a precious index on my data type so that I can filter by a timestamp during the retrieval process, I'll gather all records in a set and iterate through each one in Cloud Code. Is that more efficient than allowing a search on a non-indexed field?

Hello!
How can I get the count of elements for a query?

var api = Spark.getGameDataService();
api.getItem("testMarket").count();

like Spark.runtimeCollection("testMarket").count(); did for runtime collections.


Thanks




1 person likes this
@Denis, since getItem() requires a second ID parameter, the result will not be an array. It will be a SparkGetDataResult; if the document exists, result.document() will not be null.

In the case of api.queryItems(), which may return multiple records, it seems the only option is to iterate through the result set and increment your own counter.
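
Something like this, I believe (a sketch; "testMarket" and the "status" field are placeholders for whichever type and indexed field you query on):

// Sketch: count queryItems() results by walking the cursor yourself.
var api = Spark.getGameDataService();
var query = api.S("status").eq("open"); // hypothetical indexed field
var result = api.queryItems("testMarket", query, api.sort("status", true));

var count = 0;
if (!result.error()) {
    var cursor = result.cursor();
    while (cursor.hasNext()) {
        cursor.next();
        count++;
    }
}
// count now holds the number of returned items (capped by the cursor limit)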


 

Hi Denis,


Lo is correct here. Currently there is no count function to run on the cursor; you would need to iterate over it and count the items yourself. As the cursor limit is 100, it shouldn't be too intensive to do this. @Lo, there have been times when excessive collection drops have caused issues for users. I also just want to say thanks for all of the feedback. It is being recorded and fed back to the product team for review, so please keep it coming. We are listening to all the feedback we get.


Regards,

Liam
