The built-in facilities should take care of this for you.

You can just merge them with concat and then use Array's sort method, which takes an optional custom comparator (in your case, "compare by offset"). This should run in O(n log(n)):

function merge(arrays) {
  return arrays.reduce((m, x) => m.concat(x), [])
    .sort((a, b) => a.offset - b.offset);
}

NOTE: This doesn't take advantage of the fact that the individual arrays are already sorted (exploiting that would allow an O(n) solution), but it's so much simpler that it's worth checking whether it meets your needs before optimizing further.
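
For example, with some made-up input (your real objects presumably carry more than just an offset, but that's all the comparator needs):

const a = [{offset: 1}, {offset: 5}, {offset: 9}];
const b = [{offset: 2}, {offset: 6}];
const c = [{offset: 3}, {offset: 4}, {offset: 8}];

console.log(merge([a, b, c]).map(x => x.offset));
// [1, 2, 3, 4, 5, 6, 8, 9]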

## UPDATE

I went ahead and did a rewrite with some guesses at micro-optimizations that might help (or might not). Try it on your sample data and see what happens.

The code is a bit more compact than your original, but still not what I'd call highly readable; I traded some clarity for an optimized, procedural style:

function merge(arrays) {
  // One cursor per input array, pointing at its next unused element.
  var indexes = Array(arrays.length).fill(0);
  var result = [];
  var minIndex, minValue, minObject; // shared state written by minByOffset below
  const totalLength = arrays.reduce((sum, arr) => sum + arr.length, 0);
  // A sentinel with offset Infinity stands in for exhausted arrays.
  const infiniteOffset = {offset: Infinity};
  const nextUnusedElm = (arr, i) => arr[indexes[i]] || infiniteOffset;
  while (result.length < totalLength) {
    var candidates = arrays.map(nextUnusedElm);
    minByOffset(candidates);
    indexes[minIndex]++;
    result.push(minObject);
  }
  return result;

  function minByOffset(arr) {
    minValue = Infinity;
    arr.forEach((x, i) => {
      if (x.offset >= minValue) return;
      minIndex = i;
      minObject = x;
      minValue = x.offset;
    });
  }
}
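
Called on the same made-up input as the earlier example, this version should produce the identical ordering. The shared minIndex/minValue/minObject variables are part of the micro-optimization guesses: minByOffset writes into them instead of returning a fresh object on every pass through the loop.

console.log(merge([a, b, c]).map(x => x.offset));
// [1, 2, 3, 4, 5, 6, 8, 9]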

A final optimization you could make (again, it may or may not have much of an effect) is to remove an array once all of its elements have been used up. That is, restructure the code to avoid some unnecessary comparisons. In your code, it would avoid:

if (state[i] >= arrays[i].length) {

In my version, it would avoid:

arr[indexes[i]] || infiniteOffset;

That is, an extra array lookup and a comparison inside the loop. These are fast operations, so the optimization may not matter...
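
Here's a rough, untested sketch of that idea applied to my version: pair each array with its own cursor and drop it from the working set once it's exhausted, so the inner loop only ever looks at arrays that still have elements left.

function merge(arrays) {
  // Pair each non-empty array with a cursor; exhausted arrays get removed entirely.
  var live = arrays.filter(arr => arr.length > 0).map(arr => ({arr: arr, i: 0}));
  var result = [];
  while (live.length > 0) {
    // Find the live array whose next unused element has the smallest offset.
    var min = live[0];
    for (var k = 1; k < live.length; k++) {
      if (live[k].arr[live[k].i].offset < min.arr[min.i].offset) min = live[k];
    }
    result.push(min.arr[min.i++]);
    // Drop the array once all of its elements have been used,
    // so later iterations never compare against it again.
    if (min.i >= min.arr.length) live.splice(live.indexOf(min), 1);
  }
  return result;
}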
