Experiment: Use native php tests #956
Conversation
Fixes a parse error when passing multiple argument spreads, such as `foo(...$bar, ...$baz)`. Adds extra checks for function definitions where a variadic parameter is followed by other parameters. Fixes #946
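A minimal sketch of the two cases this covers, assuming a hypothetical `parse(code)` helper that wraps the parser and throws when it reports a syntax error:

```js
// Sketch only: `parse` is a hypothetical helper that runs the parser on a
// PHP snippet and throws if it reports a syntax error.
test('argument spreads and variadic parameters', () => {
  // Multiple argument spreads in one call should now parse cleanly.
  expect(() => parse('<?php foo(...$bar, ...$baz);')).not.toThrow();

  // A variadic parameter followed by another parameter is invalid PHP,
  // so the parser should flag definitions like this.
  expect(() => parse('<?php function foo(...$args, $extra) {}')).toThrow();
});
```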
Fix error on variadic function calls.
chore: upgrade dependencies
Hi @MaartenStaa
Wow, this sounds really exciting! I've been wanting to reduce the amount of boilerplate in our testing setup for a while, similar to the .phpt files from php-src - but this takes it two steps further. Unfortunately I probably won't find time to take a closer look at this until sometime next week. For now I just wanted to share my excitement and encourage you to keep pushing this forward 💯
@MaartenStaa In the prettier PHP plugin, we're parsing plain php files for snapshot testing using jest setupFiles, thereby avoiding the code generation step: https://github.com/prettier/plugin-php/blob/main/tests_config/run_spec.js
Would something like that work here as well? It would be nice if we could just use the .phpt files directly without the need to generate intermediate testing code.
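Roughly something like this (a rough sketch only: the fixture directory, the assumed `./parse` helper module wrapping the parser, and snapshotting whole .phpt files as input are all assumptions):

```js
const fs = require('fs');
const path = require('path');

// Hypothetical helper module that wraps the parser and returns an AST.
const parse = require('./parse');

// Hypothetical directory containing the .phpt fixtures pulled in from php-src.
const fixtureDir = path.join(__dirname, 'php-src', 'tests');

describe('php-src tests', () => {
  const files = fs.readdirSync(fixtureDir).filter((f) => f.endsWith('.phpt'));
  for (const file of files) {
    test(file, () => {
      const source = fs.readFileSync(path.join(fixtureDir, file), 'utf8');
      // Snapshot the AST so regressions show up as snapshot diffs.
      expect(parse(source)).toMatchSnapshot();
    });
  }
});
```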
@MaartenStaa this is really cool, should we just create a blacklist of failing tests, so we can work towards merging this, and then work on fixing all the cases where it fails?
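One way that could look, sketched under the assumption that each generated test has a stable name (the entries and the wrapper name below are hypothetical):

```js
// Sketch of a skip list: test names we expect to fail for now, so the suite
// stays green while the remaining cases are fixed one by one.
const knownFailures = new Set([
  'Zend/tests/some_failing_case.phpt', // hypothetical entry
]);

// Wrap Jest's `test` so known failures are skipped instead of breaking CI,
// while still showing up as skipped in the report.
function phptTest(name, fn) {
  (knownFailures.has(name) ? test.skip : test)(name, fn);
}

module.exports = { phptTest };
```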
@czosel I had an idea the other day. A decent number of the bug reports on this project are due to small correctness issues. So I thought, is there a way to get ahead of the reports, and find them ourselves?
Well, the PHP project itself has many tests (.phpt files), so I set up this experiment. I pulled in the php-src repository as a submodule and made a script that generates Jest unit tests from the PHP tests, marking which should fail and which should not. This branch and PR are the result.

As you can see in the commits, I've already tackled several correctness issues, but there are still plenty of failing tests. I'm jumping the gun and opening the PR to get early feedback on the idea and approach.
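The basic idea is that each .phpt file is split into its sections (--TEST--, --FILE--, --EXPECT--, and so on) and a Jest test is emitted per file. A simplified sketch of that idea (not the actual script; the section handling, the `shouldFail` flag, and the `parse` helper in the generated code are assumptions):

```js
const fs = require('fs');

// Split a .phpt file into its named sections, e.g. TEST, FILE, EXPECT.
function parsePhpt(contents) {
  const sections = {};
  let current = null;
  for (const line of contents.split('\n')) {
    const match = line.match(/^--([A-Z_]+)--$/);
    if (match) {
      current = match[1];
      sections[current] = '';
    } else if (current) {
      sections[current] += line + '\n';
    }
  }
  return sections;
}

// Emit Jest test code for one .phpt file. `shouldFail` marks tests where the
// parser is expected to reject the input; `parse` is a hypothetical helper
// available in the generated test files.
function generateTest(name, phptPath, shouldFail) {
  const { FILE: source } = parsePhpt(fs.readFileSync(phptPath, 'utf8'));
  const assertion = shouldFail
    ? `expect(() => parse(${JSON.stringify(source)})).toThrow();`
    : `expect(() => parse(${JSON.stringify(source)})).not.toThrow();`;
  return `test(${JSON.stringify(name)}, () => { ${assertion} });\n`;
}
```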
List of still failing tests:
Note: it's possible that some of these are beyond the scope of this project, or are cases where the test generator incorrectly expects the test to fail, or fails to mark a test as expected to fail when it should.