I'm trying to write good code to extrapolate economic data, and I'm having a hard time finding an efficient/clean way to do it (i.e. nice code that can easily be reused in different ways later).

Example of economic data split into a country => year => value structure:

['FR'] => [ 2008 => 50, 2009 => 100],['US'] => [ 2008 => 70, 2009 => 20]

However, sometimes additional classification levels come in. So, for example, ['export'] => the previous array, ['import'] => another one.
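In code, the two shapes map naturally onto nested dictionaries (or PHP associative arrays); here is a Python rendering of both, with made-up numbers for the 'import' branch:

```python
# Two-level shape: country => year => value
values = {
    "FR": {2008: 50, 2009: 100},
    "US": {2008: 70, 2009: 20},
}

# Three-level shape with a classification key on top:
# classification => country => year => value
classified = {
    "export": values,
    "import": {  # made-up numbers, for illustration only
        "FR": {2008: 30, 2009: 40},
        "US": {2008: 10, 2009: 15},
    },
}
```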

I'm asked to calculate an average progression coefficient per year, per classification (squishing the country key):

(['export'] => [2008 => 1.1])

, then to apply it to the simpler country => year => value form, and then to calculate the relative error on the existing data.

In the end, I will have to calculate another per-year coefficient from another classification, again calculate the relative error when applying it, and finally keep the better of the two.
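A minimal sketch of that whole pipeline, assuming "progression coefficient" means the year-over-year ratio value[y] / value[y-1] averaged across countries (that interpretation, the function names, and the sample numbers are mine, and it's sketched in Python rather than the asker's PHP):

```python
def year_coefficients(data):
    """data: country => year => value.
    Average the ratio value[y] / value[y-1] across countries, per year."""
    ratios = {}
    for series in data.values():
        for year, value in series.items():
            prev = series.get(year - 1)
            if prev:
                ratios.setdefault(year, []).append(value / prev)
    return {year: sum(r) / len(r) for year, r in ratios.items()}

def mean_relative_error(data, coefs):
    """Predict value[y] = value[y-1] * coefs[y] and compare to the actuals."""
    errs = []
    for series in data.values():
        for year, coef in coefs.items():
            prev, actual = series.get(year - 1), series.get(year)
            if prev and actual:
                errs.append(abs(prev * coef - actual) / actual)
    return sum(errs) / len(errs)

def best_classification(classified, target):
    """Score each classification's coefficients against the target
    country => year => value data; keep the one with the lowest error."""
    scored = {
        name: mean_relative_error(target, year_coefficients(data))
        for name, data in classified.items()
    }
    return min(scored, key=scored.get)

# Sample data (the 'import' numbers are invented for illustration)
export_data = {"FR": {2008: 50, 2009: 100}, "US": {2008: 70, 2009: 20}}
import_data = {"FR": {2008: 30, 2009: 40}, "US": {2008: 10, 2009: 15}}
```

With these numbers, `year_coefficients(export_data)` gives 2009 => (100/50 + 20/70) / 2, and `best_classification` simply compares the two mean errors. The point of splitting it this way is that each step takes and returns plain nested dicts, so no per-level class hierarchy is needed.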

To summarize: I have a variable number of key levels, one of which can go away due to averaging, and I will still need to apply/use the resulting data where that key level may be present (or another one, but I think that's not the main problem).
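One alternative to a modified Decorator for the "variable key levels" problem: a small recursive helper that collapses any one level of the nested structure by averaging, so "squishing the country key" becomes just a depth argument. A hypothetical sketch, in Python:

```python
def collapse_level(data, depth):
    """Average away the keys at the given depth of a nested dict.
    depth=0 collapses the top-level key; leaves must be numbers.
    Hypothetical helper, not the asker's code."""
    if depth == 0:
        acc = {}
        for subtree in data.values():
            _accumulate(subtree, acc)
        return _average(acc)
    return {key: collapse_level(sub, depth - 1) for key, sub in data.items()}

def _accumulate(subtree, acc):
    """Collect leaf values into lists, grouped by their remaining key path."""
    for key, value in subtree.items():
        if isinstance(value, dict):
            _accumulate(value, acc.setdefault(key, {}))
        else:
            acc.setdefault(key, []).append(value)

def _average(acc):
    """Replace each collected list of leaves by its mean."""
    return {
        key: _average(value) if isinstance(value, dict) else sum(value) / len(value)
        for key, value in acc.items()
    }
```

For classification => country => year => value data, `collapse_level(data, 1)` averages over countries and returns classification => year => mean value; the same function handles any number of levels, which is where a fixed Decorator stack tends to get painful.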

So I thought I could try a Decorator pattern, but instead of having only one decorator, each object would have an array of Decorators, each with the same key level minus one.

But changing the fundamentals of a design pattern doesn't sound like a good idea to me (considering my level), so here I am asking for suggestions. I've done this before, but even with 2 iterations the code, while it worked, was really painful to write, read, and change.

Thanks for the help.

Adam Miklosi
asked Oct 9, 2018 at 17:08
    Writing software is about writing code, not stitching together software patterns. Data structures are chosen based on their performance characteristics. Your code doesn't have to be "ingenious;" it just has to be maintainable. Commented Oct 9, 2018 at 18:57
  • Couldn't you use SQL for that? Seems like you are looking at finding a solution to aggregating flat metrics in various ways, based on different groupings. That's quite simple and efficient to perform with a SQL database. Commented Oct 11, 2018 at 13:26
  • Depends on what you mean by aggregating, but maybe yes. The thing is, while I can see how the SQL row data format could make things easier, I fear that's completely nullified by the syntax, the lack of debugging tools, etc. I'm currently using PHP, while it's clearly not the best choice, only because I do want to update/insert those results into SQL in the end, and those numbers come from a database. But I struggled to write even simple things in procedural MySQL, so for me it's kind of a no-no. Though I could be wrong. Commented Oct 11, 2018 at 15:10
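Following the SQL suggestion in the comments: the "average progression per classification and year" step is indeed a single GROUP BY over a self-join on the previous year. A sketch using Python's built-in sqlite3 (the table and column names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE metrics (classification TEXT, country TEXT, year INTEGER, value REAL)"
)
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?)", [
    ("export", "FR", 2008, 50), ("export", "FR", 2009, 100),
    ("export", "US", 2008, 70), ("export", "US", 2009, 20),
])

# Per-country year-over-year ratio, averaged per classification and year --
# the self-join pairs each row with the same country's previous year,
# and GROUP BY squishes the country key.
rows = conn.execute("""
    SELECT cur.classification, cur.year, AVG(cur.value / prev.value)
    FROM metrics cur
    JOIN metrics prev
      ON prev.classification = cur.classification
     AND prev.country = cur.country
     AND prev.year = cur.year - 1
    GROUP BY cur.classification, cur.year
""").fetchall()
```

Since the numbers already live in a database, pushing the averaging into one query like this avoids the nested-array plumbing entirely; only the error comparison would remain in application code.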
