I need to generate a unique ID for each user when an install occurs, which can be shared freely amongst other users. The format of the ID is numeric, and IDs can be sequential. I can think of several approaches, such as keeping a single value in a collection that is incremented each time the next ID is requested. This obviously relies on strict read consistency being set up for MongoDB, so I'm interested in whether that is the case (I'm assuming so).
I don't want to expose the underlying player ID, and it's also a lot for a user to enter when searching for other players. Do you guys have any suggestions on this?
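Roughly what I had in mind, as a sketch only (the collection name "playerIds" and the field names here are placeholders, not anything that exists yet):
// Read the current value, then write it back incremented.
// Without an atomic operation, two installs happening at the same time
// could read the same value and hand out duplicate IDs.
var doc = Spark.runtimeCollection("playerIds").findOne({"_id": "nextId"});
var next = (doc ? doc.value : 0) + 1;
Spark.runtimeCollection("playerIds").update({"_id": "nextId"}, {"$set": {"value": next}}, true, false);
var newUserId = next;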
Thanks!
- Toby
Best Answer
Tech Support said almost 8 years ago
Hi Toby
You can use mongo's findAndModify functionality to have an atomic counter, so you can be sure only one value will ever be returned for each call.
// Arguments follow the MongoDB findAndModify convention: query, fields, sort, remove, update, returnNew, upsert
var result = Spark.runtimeCollection("mycounters").findAndModify({"_id":"firstcounter"}, {}, {}, false, {"$inc" : {"counter" : 1}}, true, true);
// returnNew is true, so result holds the document after the increment
var newCounterValue = result.counter;
This will create a collection called mycounters with a single document that is atomically incremented each time you make the call, and the new value is returned.
Using this approach, you could have multiple counters in the same collection, each with a different _id. As it also uses upsert, you don't need to worry about pre-creating the collection or the document.
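For example, a second counter can live alongside the first one just by using another _id; the name "itemcounter" below is only illustrative:
// Same call as above, but incrementing a separate counter document
var itemResult = Spark.runtimeCollection("mycounters").findAndModify({"_id":"itemcounter"}, {}, {}, false, {"$inc" : {"counter" : 1}}, true, true);
var newItemCounterValue = itemResult.counter;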
Hope that helps
Gabriel
Toby Gladwell said almost 8 years ago
Thanks, Gabriel, for your prompt response. This was the approach I was going to follow, so I'll implement it as prescribed.
- Toby
Hougant Chen said almost 7 years ago
This is useful info.
Would you consider using this method to assign a globally unique ID to, say, generated item data? (We may be in the hundreds of thousands of items.)
What would be some drawbacks of using this method for that case? If the servers are sharded, would that present an issue?
We could generate a GUID, but that might be overkill for our purposes; we would prefer a number for fast lookup on the client side.
Tom Looman said almost 6 years ago
Is this counter an int32 or an int64? The former could potentially run out after "a long time". Is there a built-in GUID generator in Cloud Code too?