Answered

Is it possible to iterate over a SparkMongoCursor twice?

Hi,


Like the title says: is there a rewind() or reset() method? If not, could you move this to suggestions? :)


Best Answer

Which approach would be less expensive when I need to iterate over a result n times?


a)

var cursor = Spark.runtimeCollection("col").find(query);
var doc;

while(cursor.hasNext())
{
    doc = cursor.next();
    // do something with doc
}

// get new cursor
cursor = Spark.runtimeCollection("col").find(query);

while(cursor.hasNext())
{
    doc = cursor.next();
    // do some other stuff with doc
}

b)

var result = Spark.runtimeCollection("col").find(query).toArray();
var doc;

for(var i = 0, n = result.length; i < n; i++)
{
    doc = result[i];
    // do something with doc
}

for(var i = 0, n = result.length; i < n; i++)
{
    doc = result[i];
    // do some other stuff with doc
}

In the first approach I would save each document (if modified) so that changes made in the first iteration are available in the second iteration.
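
For illustration, a rough sketch of a) with saving in between (the "counter" field is just a made-up example, and I am assuming the collection's save() behaves like the Mongo shell's):

var col = Spark.runtimeCollection("col");
var cursor = col.find(query);
var doc;

while(cursor.hasNext())
{
    doc = cursor.next();
    doc.counter = (doc.counter || 0) + 1; // example modification
    col.save(doc); // persist, so the second pass sees the change
}

// second pass re-queries and picks up the saved changes
cursor = col.find(query);

while(cursor.hasNext())
{
    doc = cursor.next();
    // doc.counter now reflects the first pass
}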

In the second approach, toArray() costs one extra pass over the result just to fetch it, and the resulting array could be huge. But I have fewer database calls (no saving in between needed, because the object references persist).

A compromise is to make the first iteration with the SparkMongoCursor and push everything into an array for later use (see the sketch below). But this is not very useful in many cases.
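
Just a sketch of what I mean, reusing the names from above:

var cursor = Spark.runtimeCollection("col").find(query);
var cached = [];
var doc;

// first pass: process each document and cache it at the same time
while(cursor.hasNext())
{
    doc = cursor.next();
    // do something with doc
    cached.push(doc);
}

// every later pass reuses the cached array, with no further database calls
for(var i = 0, n = cached.length; i < n; i++)
{
    doc = cached[i];
    // do some other stuff with doc
}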

Also, my example uses 2 iterations, but the actual question is which version performs better with n iterations.

Would be interesting to know how other people handle this.


Cheers

David


Hi David,


I'll pass this on as a feature request.

Cheers.


Oisin


Has there been any development on the rewind feature? It would be really useful.

Any update on this issue?


Honestly, many basic MongoDB features are missing here; they would greatly improve flexibility and performance.


Dear GameSparks team, can you update this? Beyond this example, a big part of the MongoDB API should be updated / improved; it is not even possible to call some functions on cursors for better processing of results.


Is improving the GameSparks platform still a focus, or have you shifted your focus to other Amazon backend services?


Please think about it.

GameSparks is a dead platform now.


