Answered

How resilient is SparkScheduler?

https://docs.gamesparks.net/documentation/cloud-code-api/utils-cloud-code-api/sparkscheduler

Neat mechanism!


How confident should I be that the referenced module will actually execute at (or around) the indicated delay time?


How does it work behind the scenes? (No need to be super technical - just curious whether it uses some sort of language-specific scheduling/deferred-event mechanism, or just stores a timestamp somewhere and has a periodic job that checks every so often.)
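For reference, the kind of call I'm asking about looks something like this (the "CHEST_READY" short code and the data are just placeholders on my end):

    // Ask the scheduler to run the Cloud Code module with short code "CHEST_READY"
    // (placeholder) in 4 hours, passing the player id along as data.
    var scheduler = Spark.getScheduler();
    scheduler.inSeconds("CHEST_READY", 4 * 60 * 60, {"playerId": Spark.getPlayer().getPlayerId()});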


thanks!



Curious about this as well. 


Furthermore, is it guaranteed to execute eventually, even if it fires a little late due to some issue?

Hi Jeff,

The SparkScheduler runs on its own thread and keeps track of its native server time (UTC). In terms of resilience, our servers run on both Microsoft Azure and Amazon Web Services, both of which are highly dependable and versatile. So in that regard, you can be sure that schedulers will fire on time and consistently. 
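Purely as an illustration (the "SEND_REMINDER" short code and the data below are placeholders), the two halves look roughly like this, with the delay measured against that server UTC clock:

    // Scheduling side: run the "SEND_REMINDER" module in 10 minutes, relative to server UTC time.
    Spark.getScheduler().inSeconds("SEND_REMINDER", 10 * 60, {"playerId": "example-player-id"});

    // Inside the scheduled module itself: read back the data that was passed in.
    var data = Spark.getData();
    var playerId = data.playerId;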

Best Regards, Patrick. 

Thanks for the quick response, Patrick - let me ask that a different way - 

What if my running GameSparks instance (whatever that looks like) has to be 'moved' or restored or migrated as a normal part of load management or internal maintenance - can I assume that anything scheduled with SparkScheduler will 'survive' that process, or no?

(I'm evaluating its suitability for chest-like timers - hours (or even days) in the future.)


thanks!!

Best Answer

Hey Jeff, 

The GameSparks platform comprises Core Services (Developer Portal, Standard Analytics, etc.), geographically distributed Load Balancers, and Runtime Clusters (independent game instances, etc.). 


A GameSparks Runtime Cluster contains the resources necessary to run a game, i.e.: 

  • API Servers
  • Data Stores

The config for each cluster is stored in services from our cloud providers (Azure Table Service, Azure Blob Storage, Amazon SimpleDB, Amazon S3). Each of these components is redundant, meaning we can handle the loss of multiple servers. To move on to your specific question: games can be (and frequently are) migrated between clusters with minimal downtime. Because the config etc. is backed up externally in the services above, it will survive any potential service outage or migration, and this includes anything scheduled via the SparkScheduler class. 
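As a concrete example (the "CHEST_UNLOCK" short code and the key format below are illustrative), a schedule registered with the keyed variant of inSeconds is persisted along with the rest of the game's data, so it can still fire - or be cancelled - after a migration or restart:

    // Schedule a chest unlock 24 hours out, with a key so it can be cancelled later.
    var playerId = Spark.getPlayer().getPlayerId();
    Spark.getScheduler().inSeconds("CHEST_UNLOCK", 24 * 60 * 60, {"playerId": playerId}, "chest-" + playerId);

    // If the chest is opened early, the pending schedule can be cancelled by the same key,
    // even if the game has been migrated to another cluster in the meantime.
    Spark.getScheduler().cancel("chest-" + playerId);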

Does this clarify things for you? Happy to answer any further questions you may have.

Best Regards, Patrick. 

Thanks for the additional details, Patrick...

I guess what I was trying to figure out was whether things 'scheduled' via SparkScheduler have their 'expiration' times persisted somewhere that would survive a restart, rather than being held in some sort of ScheduledThreadPoolExecutor (Java example). 
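In other words - rough sketch only, not meant to be the actual internals, and "OPEN_CHEST" is just a placeholder - I wanted to rule out model A and confirm it behaves like model B:

    // Model A - in-memory timer (the ScheduledThreadPoolExecutor flavour): the pending
    // task exists only inside the running process, so a restart or migration loses it.
    var inMemoryOnly = { dueAt: new Date().getTime() + 4 * 60 * 60 * 1000, module: "OPEN_CHEST" };

    // Model B - persisted due-time: the schedule survives a restart/migration, which per
    // Patrick's answer above is how SparkScheduler entries behave.
    Spark.getScheduler().inSeconds("OPEN_CHEST", 4 * 60 * 60, {"playerId": Spark.getPlayer().getPlayerId()});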

S'all good.

What I read from your notes above was "GameSparks Awesome.  SparkScheduler Rock Solid.  Schedule Away!"

So we can close this as "Answered".


thanks again!

You're more than welcome, Jeff - glad I could be of help. 

Best Regards, Patrick. 
