Software Engineering

If you make only one pick, you're likely doing it wrong. :) Otherwise, this is essentially random sampling, a respectable approach to gathering information in cases like the one you describe. For a more rigorous treatment of how this could be done, study the Monte Carlo method.
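The sampling idea above can be sketched in a few lines. This is a minimal illustration, not a prescribed tool: it assumes a local checkout of the project and Python; `sample_files` is a hypothetical helper name, and the suffix list is just an example you would adjust.

```python
import random
from pathlib import Path

def sample_files(repo_root, k=10, seed=None, suffixes=(".py", ".c", ".h")):
    """Pick k source files uniformly at random for manual review."""
    rng = random.Random(seed)  # seeded RNG so a review session is reproducible
    files = [p for p in Path(repo_root).rglob("*") if p.suffix in suffixes]
    if len(files) <= k:
        return files  # small project: just review everything
    return rng.sample(files, k)
```

Passing a fixed `seed` makes the pick reproducible, which is handy if you want a colleague to review the same sample.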

Regarding things to look for, consider finding a tried-and-true checklist, studying it, and tailoring it to your specific needs.


Some other aspects worth considering when evaluating a project are listed below (a checklist summarized from my past experience):

  • Releases (along with changelog), versioning and publishing discipline
It's generally much easier to investigate issues when one can say "the bug was found in release 1.2.3, available for download at this URL" than "oh, two years ago someone sent me a mail with a binary attached."

  • Developer documentation, API reference and code examples
    Helps to avoid efforts wasted on reinventing the wheel and learning basics by trial and error.
Note these can also be randomly sampled for a quick study.

  • Bug tracking
If there's no tracking at all, it's a huge red flag; if there is one, consider quickly checking it using the same random sampling approach you use for the source code.

  • Positive feedback
    Find out about project users and try to do a random sampling study of their feedback.
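For the bug-tracker and feedback items above, the Monte Carlo angle is that a modest random sample lets you estimate a project-wide proportion (say, the fraction of issues still open) without reading every entry. A rough sketch, under stated assumptions: `issues` is a hypothetical list of dicts with an `"open"` flag, and the interval uses the textbook normal approximation, which is crude for very small samples.

```python
import math
import random

def estimate_open_fraction(issues, k=30, seed=None):
    """Estimate the fraction of open issues from a random sample,
    with an approximate 95% confidence interval (normal approximation)."""
    rng = random.Random(seed)
    sample = rng.sample(issues, min(k, len(issues)))
    p = sum(1 for issue in sample if issue["open"]) / len(sample)
    # 1.96 is the z-score for a two-sided 95% interval
    half_width = 1.96 * math.sqrt(p * (1 - p) / len(sample))
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))
```

The same shape works for any yes/no property you sample, e.g. "did this user report sound positive" when surveying feedback.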
