I implemented the heaviest stone algorithm, but I am not sure about its time complexity. I'm doing a sort, which is O(n log n), but it's inside a while loop.
Problem:
We have a collection of stones, each stone has a positive integer weight.
Each turn, we choose the two heaviest stones and smash them together. Suppose the stones have weights x and y with x <= y. The result of this smash is:
If x == y, both stones are totally destroyed; If x != y, the stone of weight x is totally destroyed, and the stone of weight y has new weight y-x. At the end, there is at most 1 stone left. Return the weight of this stone (or 0 if there are no stones left.)
Example:
Input: [2,7,4,1,8,1]
Output: 1
Explanation:
We combine 7 and 8 to get 1 so the array converts to [2,4,1,1,1] then,
we combine 2 and 4 to get 2 so the array converts to [2,1,1,1] then,
we combine 2 and 1 to get 1 so the array converts to [1,1,1] then,
we combine 1 and 1 to get 0 so the array converts to [1], and that's the value of the last stone.
My solution:
var lastStoneWeight = function(stones) {
  if(!stones || stones.length === 0) return 0;
  if(stones.length === 1) return stones[0];
  stones.sort((a,b) => b-a);
  while(stones.length > 1) {
    const x = stones[0];
    const y = stones[1];
    stones.splice(0, 2);
    if(x !== y) {
      stones.push(x-y);
      if(stones.length === 1) return stones[0];
      stones.sort((a,b) => b-a);
    }
  }
  return stones.length > 0 ? stones[0] : 0;
};
By the way, is there a way to get better performance? Maybe without sorting?
Thanks
- If you're worried about efficiency, couldn't you simply go through the entire list and find the two heaviest stones instead of sorting every time? You can create a new function for this, and I think it should give you a significant increase in performance. – cliesens, Aug 8, 2020
- For this case it approximates \$O(n^2\ln(n))\$ (the general worst case), because you are sorting inside the while loop. – sɪʒɪhɪŋ βɪstɦa kxɐll, Aug 8, 2020
- \$O(n^2\log n)\$ is yours. \$O(n^2)\$ if you implement what @cliesens proposed. \$O(n\log n)\$ if you are willing to implement a heap. – slepic, Aug 8, 2020
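A sketch of the linear-scan variant cliesens describes (illustrative code, not from the thread): each pass finds the two heaviest stones in a single \$O(n)\$ scan instead of re-sorting, giving \$O(n^2)\$ overall.

const lastStoneWeightScan = (stones) => {
  const pile = [...stones]; // work on a copy
  while (pile.length > 1) {
    // one O(n) pass to find the indices of the heaviest (iy)
    // and second heaviest (ix) stones
    let iy = 0, ix = 1;
    if (pile[ix] > pile[iy]) [iy, ix] = [ix, iy];
    for (let i = 2; i < pile.length; i++) {
      if (pile[i] > pile[iy]) { ix = iy; iy = i; }
      else if (pile[i] > pile[ix]) ix = i;
    }
    const y = pile[iy], x = pile[ix]; // x <= y
    // remove the larger index first so the smaller index stays valid
    pile.splice(Math.max(iy, ix), 1);
    pile.splice(Math.min(iy, ix), 1);
    if (x !== y) pile.push(y - x);
  }
  return pile.length ? pile[0] : 0;
};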
- The data structure you want is a priority queue (en.wikipedia.org/wiki/Priority_queue); this essentially solves the problem of having to repeatedly sort the collection. Unfortunately, JS doesn't come with priority queues built in (the lack of good built-in data structures is an unfortunate fact about the language, IMO), so you'll have to implement one yourself or use a dependency. – Countingstuff, Aug 8, 2020
- I added a C++ solution here: codereview.stackexchange.com/questions/247668/… It's a bit more complex, and the crucial parts are implemented by C++'s standard library, but I'm sure you can find code for the three functions on the wiki. A map won't help you in any way here. For the follow-up problem, it looks NP-hard, because you can try all permutations of the input to find the smallest result. – slepic, Aug 9, 2020
1 Answer
Issue(s)
- You sort descending, but then assign the heaviest stone to x and the second heaviest stone to y, whereas in your problem you state "...stones have weights x and y with x <= y".
- Code isn't as DRY (Don't Repeat Yourself) as it could be.
Suggestions
- Provide stones a default/initial empty array value.
- Sort descending and assign const y = stones[0]; and const x = stones[1];, then push the new value stones.push(y - x); to match the problem. (Or sort ascending as normal and take from the array end; more on this later.)
- Use array destructuring to assign x and y, since array::splice returns an array of the spliced-out values.
- Sort once at the beginning of each iteration.
- Remove extraneous array length checks; let the while-loop condition handle it.
Code
const lastStoneWeight = (stones = []) => {
  if (!stones || !stones.length) return 0;
  while (stones.length > 1) {
    stones.sort((a, b) => a - b);
    const [x, y] = stones.splice(-2);
    if (x !== y) stones.push(y - x);
  }
  return stones[0] || 0;
};
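Running it on the question's example reproduces the expected result:

console.log(lastStoneWeight([2, 7, 4, 1, 8, 1])); // 1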
If you can use Optional Chaining and Nullish Coalescing
const lastStoneWeight = (stones = []) => {
  if (!stones?.length) return 0;
  while (stones.length > 1) {
    stones.sort((a, b) => a - b);
    const [x, y] = stones.splice(-2);
    if (x !== y) stones.push(y - x);
  }
  return stones[0] ?? 0;
};
Here I use a plain ascending sort and splice off the last two elements, which removes the need to shift the entire array forward two indices on every removal.
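For example, splice with a negative start index takes elements from the end:

const sorted = [1, 1, 2, 4]; // already sorted ascending
const [x, y] = sorted.splice(-2); // x = 2, y = 4; sorted is now [1, 1]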
Using a Heap Data Structure
Since you ask about improved performance, and the going suggestion in the comments is to use a heap/priority queue, here is a very similar implementation built on one.
const lastStoneWeightHeap = (stones = []) => {
  if (!stones?.length) return 0;
  const heap = new PriorityQueue(); // <-- uses equivalent comparator as array::sort
  stones.forEach((stone) => heap.enq(stone)); // <-- populate heap
  while (heap.size() > 1) {
    const y = heap.deq();
    const x = heap.deq();
    if (x !== y) heap.enq(y - x);
  }
  return heap.size() ? heap.deq() : 0;
};
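Each enq/deq on a binary heap costs \$O(\log n)\$ and there are at most \$n - 1\$ smashes, so this version runs in \$O(n\log n)\$ overall, as slepic noted in the comments.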
t0 is the regular algorithm
t1 is the version using the heap/priority queue data structure
10 iterations x 10000 runs
 #  Elements      t0 avg     t1 avg
 1         8     0.00363    0.00106
 2        16     0.01036    0.00157
 3        32     0.01781    0.00224
 4        64     0.09148    0.00432
 5       128     0.22560    0.00944
 6       256     0.56833    0.01618
 7       512     2.37584    0.06091
 8      1024     8.78741    0.12614
 9      2048    34.29092    0.29697
10      4096   130.50169    0.63872
Notes:
- Time is in milliseconds
- Heap implementation used: https://www.npmjs.com/package/priorityqueuejs
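If you cannot take on the dependency, a minimal binary max-heap exposing the same enq/deq/size surface fits in about 30 lines. This is a from-scratch sketch, not the package's actual implementation:

class MaxHeap {
  constructor() {
    this.items = [];
  }
  size() {
    return this.items.length;
  }
  enq(value) {
    const items = this.items;
    items.push(value);
    // sift the new value up until its parent is no smaller
    let i = items.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (items[parent] >= items[i]) break;
      [items[parent], items[i]] = [items[i], items[parent]];
      i = parent;
    }
  }
  deq() {
    const items = this.items;
    const top = items[0];
    const last = items.pop();
    if (items.length) {
      // move the last leaf to the root and sift it down
      items[0] = last;
      let i = 0;
      while (true) {
        const left = 2 * i + 1;
        const right = 2 * i + 2;
        let largest = i;
        if (left < items.length && items[left] > items[largest]) largest = left;
        if (right < items.length && items[right] > items[largest]) largest = right;
        if (largest === i) break;
        [items[i], items[largest]] = [items[largest], items[i]];
        i = largest;
      }
    }
    return top;
  }
}

lastStoneWeightHeap works unchanged with new MaxHeap() substituted for new PriorityQueue().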
Conclusion
Using a heap data structure yields an orders-of-magnitude improvement. With the naive implementation, each time the dataset size doubles (2x) the runtime roughly quadruples (~4x), which indicates at least \$O(n^2)\$ complexity, whereas the heap implementation's runtime only roughly doubles (~2x) with each doubling of the dataset.
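To spell out the doubling argument: if \$T(n) \propto n^2\$ then \$T(2n)/T(n) = 4\$, whereas if \$T(n) \propto n\log n\$ then \$T(2n)/T(n) = 2\log(2n)/\log n \approx 2\$ for large \$n\$, which matches the ~4x and ~2x growth rates seen in the table above.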
Performance Benchmarking
performanceBenchmark.js
const measurePerf = (fn, data, runs = 1e3) =>
  [...Array(runs).keys()]
    .map(() => {
      const start = performance.now();
      fn([...data]);
      const end = performance.now();
      return end - start;
    })
    .reduce((total, current) => total + current) / runs;

const toFixed = (val, fixed) =>
  Number.isFinite(val) ? Number(val).toFixed(fixed) : val;

export const benchmark = async ({
  functions = [],
  createRunData,
  iterations = 5,
  runs = 1e3,
  logIntermediateResults
}) => {
  logIntermediateResults && console.log(`${iterations} x ${runs}`);
  const results = [];
  logIntermediateResults &&
    console.log(
      `\t# Elements\t${functions.map((_, i) => `t${i} avg`).join("\t")}`
    );
  for (let i = 0; i < iterations; i++) {
    const data = createRunData(i);
    const res = await Promise.all(
      functions.map((fn) => measurePerf(fn, data, runs))
    );
    results.push(res);
    logIntermediateResults &&
      console.log(
        `${i + 1}\t${data.length}\t${res
          .map((t) => `${toFixed(t, 5)}`)
          .join("\t")}`
      );
  }
  return results;
};
Setup & benchmark
const ITERATIONS = 10;
const RUNS = 1e4;
const SEED = 8;

const functions = [
  lastStoneWeight,
  lastStoneWeightHeap,
];

const createRunData = (i) => {
  const dataLength = SEED << i;
  const stones = [...Array(dataLength).keys()].map(() =>
    Math.floor(Math.random() * dataLength)
  );
  return stones;
};

benchmark({
  functions,
  createRunData,
  iterations: ITERATIONS,
  runs: RUNS,
  logIntermediateResults: true
});
Extended heap implementation benchmark
15 iterations x 10000 runs

 #  Elements     t0 avg
 1         8    0.00100
 2        16    0.00171
 3        32    0.00242
 4        64    0.00434
 5       128    0.00933
 6       256    0.01825
 7       512    0.05681
 8      1024    0.13715
 9      2048    0.27621
10      4096    0.59631
11      8192    1.24577
12     16384    4.75092
13     32768    6.09799
14     65536   13.07677
15    131072   28.88058
- @Kruga Thanks, I added a test case in the sandbox for an array with values like that and updated the implementations. – Drew Reese, Aug 31, 2020