
I am trying to fill a one-dimensional array into a two-dimensional array in Java. I did it this way; is there a better way than this?

public double[][] getResult(double[] data, int rowSize) {
    int columnSize = data.length;
    double[][] result = new double[columnSize][rowSize];
    for (int i = 0; i < columnSize; i++) {
        result[i][0] = data[i];
    }
    return result;
}
asked Jan 9, 2016 at 1:50
  • I think what you are doing is pretty much the best way, honestly. I suppose you may be able to use `Arrays.copyOf` or `System.arraycopy` (the array-copy methods in Java), but this is clean, since you are translating from 1D to 2D. Commented Jan 9, 2016 at 5:10
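As a side note on the commenter's idea: `System.arraycopy` does not help much here, since only one element per row is copied. The same loop can, however, be expressed with `java.util.Arrays.setAll` — a sketch (the class name `FillDemo` is just for illustration, not from the original post):

```java
import java.util.Arrays;

public class FillDemo {
    // Equivalent to the loop in the question, expressed with Arrays.setAll:
    // each row is built by a generator function indexed by row number.
    public static double[][] getResult(double[] data, int rowSize) {
        double[][] result = new double[data.length][];
        Arrays.setAll(result, i -> {
            double[] row = new double[rowSize];
            row[0] = data[i]; // first column holds the original value
            return row;
        });
        return result;
    }

    public static void main(String[] args) {
        double[][] r = getResult(new double[] {1.0, 2.0, 3.0}, 4);
        System.out.println(Arrays.deepToString(r));
        // prints [[1.0, 0.0, 0.0, 0.0], [2.0, 0.0, 0.0, 0.0], [3.0, 0.0, 0.0, 0.0]]
    }
}
```

Whether this reads better than the plain loop is a matter of taste; it does the same work.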

1 Answer


For the most part, what you are doing is fine (like @Ashwin commented on your question).

About the only thing I really don't like is the name of the function getResult. That name is just really useless. How about convert2D or something? Your other variable names are well put together, and make sense.

Another issue you may run into is if someone specifies bad input values. Your code will fail with an ArrayIndexOutOfBoundsException if someone passes a rowSize of 0 (and a NegativeArraySizeException for a negative rowSize).
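One way to fail fast on such input — a minimal sketch, assuming you want a clear message rather than the raw array exception (the name `convert2D` and the `IllegalArgumentException` choice are my assumptions, not part of the original code):

```java
import java.util.Arrays;

public class ConvertDemo {
    // Same logic as the original getResult, with an explicit argument check
    // so a bad rowSize fails with a descriptive message instead of an
    // ArrayIndexOutOfBoundsException deep inside the loop.
    public static double[][] convert2D(double[] data, int rowSize) {
        if (rowSize < 1) {
            throw new IllegalArgumentException(
                    "rowSize must be at least 1, was " + rowSize);
        }
        double[][] result = new double[data.length][rowSize];
        for (int i = 0; i < data.length; i++) {
            result[i][0] = data[i];
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.deepToString(convert2D(new double[] {5.0}, 2)));
        // prints [[5.0, 0.0]]
    }
}
```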

From a performance perspective, I expect that your code is pretty much as fast as it can be too - in a single-threaded process.

Using a Java 8 stream you can possibly make the process a bit more parallel, but it would be more complicated to read. Still, I thought it would be an interesting thing to see... though, in a production system, I would be happy to see your code instead (i.e. the following code is just a thought, not a recommendation):

public static double[][] convert2D(double[] data, int rowSize) {
    return IntStream.range(0, data.length)
            .parallel()
            .mapToObj(i -> {
                double[] row = new double[rowSize];
                row[0] = data[i];
                return row;
            })
            .toArray(s -> new double[s][]);
}

The above has the advantage that each row may be created in a different thread, and "all CPUs" on your system will be creating rows. I did some performance benchmarks, and, if the rowSize is small (10 or so) then your code is faster. If the row size starts getting larger (like 1000 or so) then the parallel code starts getting faster.

Even though the stream code can sometimes be faster, I would still probably prefer to use your code.

If you want raw numbers (note the unit for the size-10 rows is microseconds, and for the size-1000 rows is milliseconds), this is the timing for a row size of 10 with 1000 rows, run 1000 times:

Task 1d2d -> Single: (Unit: MICROSECONDS)
 Count : 1000 Average : 62.9370
 Fastest : 41.4460 Slowest : 1383.5230
 95Pctile : 95.9190 99Pctile : 124.3400
 TimeBlock : 91.553 72.914 55.230 65.829 66.551 48.998 60.405 49.428 63.745 54.717
 Histogram : 887 109 3 0 0 1
Task 1d2d -> Stream: (Unit: MICROSECONDS)
 Count : 1000 Average : 145.3900
 Fastest : 20.9210 Slowest : 85596.0480
 95Pctile : 115.2600 99Pctile : 317.3620
 TimeBlock : 983.900 101.962 61.807 49.069 57.070 36.698 32.841 34.045 45.761 50.754
 Histogram : 519 339 116 17 4 0 4 0 0 0 0 1

This is the timing for row-size 1000 for 1000 rows run 1000 times:

Task 1d2d -> Single: (Unit: MILLISECONDS)
 Count : 1000 Average : 1.3570
 Fastest : 1.1601 Slowest : 7.7698
 95Pctile : 2.5799 99Pctile : 4.2958
 TimeBlock : 2.175 1.387 1.284 1.225 1.217 1.244 1.253 1.314 1.260 1.211
 Histogram : 948 44 8
Task 1d2d -> Stream: (Unit: MILLISECONDS)
 Count : 1000 Average : 0.9919
 Fastest : 0.5680 Slowest : 92.7422
 95Pctile : 2.2405 99Pctile : 4.0231
 TimeBlock : 2.672 1.074 0.742 0.811 0.763 0.773 0.762 0.733 0.772 0.817
 Histogram : 932 19 45 3 0 0 0 1
answered Jan 9, 2016 at 13:21
  • Thank you :). Please, what did you use for the performance benchmarks? Commented Jan 13, 2016 at 2:08
  • @EslamAli - a few of us here on Code Review contributed to github.com/rolfl/MicroBench - it runs, benchmarks, and reports performance metrics. It also has a feature for "fitting" a time-complexity curve on to a function for different input sizes. Commented Jan 13, 2016 at 2:45
  • Thank you, it's great work! But I'm wondering: how is the single-threaded version faster for a row size of 1000 than for a row size of 10? Commented Jan 13, 2016 at 4:44
