Changelog: 2022
Released: 2022-02-18
Codename: (Vishwa)
A few dependencies have been bumped to the latest versions (dependent HAPI modules listed in brackets):
Several javax.* dependencies have been relocated to jakarta.*; this is only a change in the Maven packaging.
In-memory caching provided by Caffeine has been refactored to use a ServiceLoader pattern in order to improve Android compatibility. Thanks to Vitor Pamplona for the pull request!
In some cases in the JPA server, CodeSystem updates failed and left the server with a hung reindex job that never completed. This has been corrected. Thanks to GitHub user @tyfoni-systematic for the bug report!
When validating terminology as part of resource validation, or by directly calling the $validate-code operation, an incorrect caching routine sometimes returned incorrect cached results in cases where multiple display names were validated for the same system+code combination. This has been corrected.
When indexing CodeSystem content in the terminology service, an unintended busy-wait cycle sometimes caused the CPU to be thrashed while waiting for a batch job to complete. This has been resolved.
The internal LinkedChannelQueue implementation used for Batch2 processing and Subscription delivery now uses automated retries if a message processing failure occurs in order to improve resiliency.
The HFJ_RESOURCE table has a new column, FHIR_ID, which always contains the FHIR resource ID, whether client-assigned or server-assigned. This parallel storage allows a gradual migration away from the current HFJ_FORCED_ID join table, which should improve performance in a future release.
The JPA server can now handle indexing search parameters of type number where the value is a Range, e.g. RiskAssessment.probability.
The JPA server can now handle indexing search parameters of type token and reference where the value is a CodeableReference (a new datatype added in R5).
The NPM package installer did not support installing on R4B repositories. Thanks to Jens Kristian Villadsen for the pull request!
When performing create/update/patch/delete operations against the JPA server, the response OperationOutcome will now include additional details about the outcome of the operation. This includes OperationOutcome.issue.details.coding, containing a machine-processable equivalent of the outcome.
The BundleBuilder now supports adding conditional DELETE operations, PATCH operations, and conditional PATCH operations to a transaction bundle.
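As a rough sketch of the new BundleBuilder capability, the snippet below adds a conditional DELETE entry to a transaction bundle. Note that the conditional-delete method name shown here is an assumption based on the changelog entry, and the match URL is hypothetical; consult the BundleBuilder Javadoc for the actual API.

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.util.BundleBuilder;
import org.hl7.fhir.r4.model.Bundle;

public class BundleBuilderExample {
   public static void main(String[] args) {
      FhirContext ctx = FhirContext.forR4();
      BundleBuilder builder = new BundleBuilder(ctx);

      // Conditional DELETE: delete whichever Patient matches the search URL.
      // Method name is an assumption; the identifier system/value is hypothetical.
      builder.addTransactionDeleteConditionalEntry("Patient?identifier=http://example.org|12345");

      Bundle transaction = (Bundle) builder.getBundle();
      System.out.println(ctx.newJsonParser().setPrettyPrint(true)
            .encodeResourceToString(transaction));
   }
}
```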
When updating resources using a FHIR transaction in the JPA server, if the client instructs the server to include the resource body in the response, any tags that have been carried forward from previous versions of the resource are now included in the response.
The hapi-fhir-jpaserver-cql module has been replaced with a module called hapi-fhir-storage-cr. This was done to enable CQL evaluation and other clinical reasoning operations on non-JPA servers (like servers using MongoDb). Additionally, the new module supports some additional features for Measure evaluation, such as stratifiers and CQL language-level 1.5. The CQL engine and related components have also been upgraded to the latest versions.
When a REST client performs a search with an unknown search parameter, the resulting error message will now sort and deduplicate the list of allowable search parameters. Thanks to GitHub user @granadacoder for the pull request!
Previously, when the $member-match operation was executed, the returned parameters did not include the member identifier. This has now been added, and the Consent resource is now updated with the returned identifier.
Previously, the Payload Search Result Mode feature only applied to REST Hook subscriptions. It now also applies to the Message channel type: when Payload Search Result Mode is used, the message delivery module performs a search using the user-defined criteria and delivers the results to the user-specified endpoint channel as a Bundle resource.
Previously, an MDM POSSIBLE_MATCH link generated together with a POSSIBLE_DUPLICATE Golden Resource was missing the score value. This has been fixed.
Previously, asynchronous Group bulk export did not retrieve Device resources even when they were linked to a Patient and Group. This has been fixed by adding Device to the list of qualifying resources.
The JPA server now supports contention aware patch operations (i.e. using If-Match) within a FHIR Transaction. Previously the Bundle.entry.request.ifMatch element was incorrectly treated as a conditional URL and not a version tag.
The Thymeleaf narrative generator can now declare template fragments in separate files so that they can be reused across multiple templates.
The FhirPath evaluator framework now allows for an optional evaluation context object which can be used to supply external data such as responses to the resolve() function.
A new utility called CompositionBuilder has been added. This class can be used to generate Composition resources in a version-independent way, similar to the function of BundleUtil for Bundle resources.
A new experimental API has been added for automated generation of International Patient Summary (IPS) documents in the JPA server. This module is based on the excellent work of Rio Bennin and Panayiotis Savva of the University of Cyprus and was completed at FHIR Connectathon 34 in Henderson Nevada.
The Testpage Overlay module now supports customizable operation buttons on search results. For example, you can configure each row of the search results page to include a $validate button and a $diff button, or choose other operations relevant to your use case. Currently only operations which do not require any parameters are possible.
Added the ability to specify, at creation time, that the name of the subscription matching channel should be unqualified.
Previously, jobs were scheduled via a @PostConstruct-annotated method. This has been changed to an implementation of the IJobScheduler interface, so the job scheduler can schedule all the jobs after it has been started.
A new DaoConfig configuration setting has been added called JobFastTrackingEnabled, defaulting to false. If this setting is enabled, gated batch jobs that produce only one chunk will immediately trigger a batch maintenance job. This may be useful for testing, but is not recommended for production use. Prior to this change, fast-tracking was always enabled, which meant that if the server was not busy, small batch jobs would be processed quickly. However, this led to instability on high-volume servers, so this feature is now disabled by default.
The JPA server implementation of the $process-message operation has been extracted into a standalone provider called ProcessMessageProvider, which can be registered individually on servers that have provided an implementation of the backing operation. Because this operation cannot be implemented in a generic way in HAPI FHIR (it assumes business-specific processing logic), it does not make sense to include it by default.
Replaced usages of ResourcePersistentId with an interface, IResourcePersistentId, and parameterized specific implementations of it in classes that rely on a specific implementation.
Common SearchParameter functionality has been refactored. In particular, 1) Validation logic performed inside JpaResourceDaoSearchParameter has been moved into a new class called SearchParameterDaoValidator 2) Logic for extracting combo unique and combo non-unique search parameters that was inside SearchParamWithInlineReferencesExtractor has been moved into BaseSearchParamExtractor and 3) a new interface called IResourceIndexComboSearchParameter has been added which provides a common interface for combo unique and combo non-unique search parameters.
The CircularQueueCaptureQueriesListener countXXXXQueries() methods have been reworked to improve accuracy. Query totals previously counted two SQL statement executions with the same SQL but different parameters as one query, this will now be counted as two queries.
The ca.uhn.fhir.rest.client.interceptor.LoggingInterceptor now logs request/response bodies at the DEBUG level. This is changed from INFO level.
During testing, the bulk-import command line tool logs file contents. This logging has been demoted to DEBUG level as bulk-import files may contain PHI.
The built-in narrative template for the OperationOutcome resource no longer renders the OperationOutcome.issue.diagnostic inside a <pre> tag, which made long messages hard to read.
ResourceDeliveryMessage no longer includes the payload in toString(). This avoids leaking sensitive data to logs and other channels.
Several issues with the OpenAPI generator have been fixed:
* FHIR Operations will always include a POST invocation option
* Non-primitive parameters will be excluded from GET invocations
* For an idempotent (affectsState=false) endpoint with at least one required non-primitive parameter, only POST is supported
* _search endpoints are now included for the search operation
Thanks to Michael Lawley for the pull request!
Previously, persistence modules were attempting to activate subscriptions that used channel types they did not support. This has been changed, and those subscriptions will not be activated if the given channel type is not supported.
Previously, the MdmStorageInterceptor was needlessly logging a warning if theOldResource of an update was null. This is now fixed.
Previously, MDM links that connected newly created resources and golden resources had a link score of null. This has been changed to 1.0, as the golden resource should be a perfect match with the source resource it was created from.
Previously, multiple MDM POSSIBLE_MATCH links pointing to different Golden Resources could be accepted (updated to MATCH). This has now been fixed, and only one MATCH is allowed.
Previously, DB table joins were LEFT OUTER by default, which caused some queries to skip table indexes, making them very slow. This was more noticeable on the H2 database, but affected all DB types to different degrees. This has been fixed by using INNER joins unless a specific use case requires otherwise.
Previously, when exporting a group including patients referenced in observations, observations were missing from the output if Search Parameter Indexing was enabled. This is now fixed.
When using the client LoggingInterceptor with OkHttp clients, the request URL was not correctly logged. Thanks to Roel Scholten for the pull request!
Previously, when creating a bundle, if multiple resources matched the conditional URL provided in the request, the request was processed using the last resource found as a match. An error is now thrown when multiple resources are found.
Previously, an ExecutionException was thrown when $expunge operations were used with a small batch size and a low thread count due to a race condition involving the expunge limit. This has been corrected.
Fixed bug with merging 2 Golden Resources together. When merging GP1 into GP2, the result was (incorrectly) GP2 REDIRECT GP1 (instead of GP1 REDIRECT GP2).
Fixed bug with merging 2 Golden Resources together. When merging a POSSIBLE_DUPLICATE of GP1 to GP2, or GP2 to GP1, depending on the order, a dangling self-referential link would sometimes remain (e.g. GP1 POSSIBLE_MATCH GP1).
The ConceptMap/$translate operation did not work for translation against a ConceptMap resource ID if the ID was non-numeric. This has been fixed. Thanks to Panayiotis Savva for the report!
A race condition in the Cache factory has been resolved. This issue could cause strange errors such as NoSuchElementException if caches are created in multiple threads.
Broke out services for loading NPM packages and parsing NPM package resources so that they do not depend on the DAO layer.
Made the partitionId field of ResourceModifiedMessage also available to ResourceOperationMessage. This can be used to manually select a partition for messages of this type.
Previously, an excessive amount of time was required to perform an update on a large Group. This has been corrected.
When running HAPI FHIR-based applications from within IntelliJ, an arbitrary ordering of PostConstruct methods in BaseHapiFhirResourceDao could cause a crash. This has been fixed.
Fixes a regression where IndexedSearchParamExtractor returns too few links when extracting search parameters from a resource that hasn't persisted yet. E.g. when a create interceptor needs to extract search parameters from the resource before it is persisted.
Previously, if a DSTU3 Resource was updated and no changes were made, the meta.versionId value was incremented incorrectly. This has been fixed.
Support was accidentally removed for the sa and eb prefixes on date search parameters. This has been corrected.
Previously, Patient Bulk Export did not export certain resources whose only reference to the patient was in the author or performer field. This has been fixed.
Previously, running an MDM link update failed if resources had been created with a previous MDM rules version. This is now fixed with a query that retrieves and updates MDM links regardless of the MDM version of the resource links.
Depending on configuration, validation would reject resource XML that contained comments, specifically in the JSON generated from the resource object. With the same configuration, validation would also fail with an error that it could not locate the resource HapiFhirStorageResponseCode.json. Both errors are now fixed.
Previously, when using group bulk export, if the group referenced a patient that had not been updated after the _since parameter of the bulk export, none of that patient's data/sub-resources was included in the output, even if the sub-resources themselves qualified (i.e. were updated or created after the _since parameter). This issue has now been fixed.
Fixed an edge case during a Read operation where hooks could be invoked with a null resource. This could cause a NullPointerException in some cases.
Fixed a bug where the PackageInstallerSvcImpl was not treating FHIR version R4 as being compatible with R4B.
The $mdm-clear operation sometimes failed with a constraint error when running in a heavily multithreaded environment. This has been fixed.
When running RestfulServer under heavy load or a slow network, the server sometimes logged an EOFException or an IOException in the system logs despite the response completing successfully. This has been corrected.
When Batch2 work notifications are received twice (e.g. because the notification engine double delivered) an unrecoverable failure could occur. This has been corrected.
Fixed a bug with Batch2 which could cause previously completed chunks to be set back to in-progress, which in turn could cause a batch job to never complete. A safeguard has been added to ensure a job can never return to in-progress once it has completed or failed.
Fixed a bug with $meta-delete which caused the resource, when included as part of a search result, to be missing all of its tags.
When a Bulk Export job runs with a large amount of data, there is a chance the reduction step can be kicked off multiple times, resulting in data loss in the final report. Jobs will now be set to in-progress before processing to prevent multiple reduction steps from being started.
Previously, $export-poll-status requests made via POST failed with a NullPointerException. This issue has been fixed, and POST $export-poll-status requests now proceed as expected.
Java API callers of the JPA IFhirResourceDao#search() method could experience a NPE if they invoked getResources() multiple times on the same response object. This is an edge case that is very unlikely to occur, but is now corrected.
Creating a resource with an invalid embedded resource reference would not fail, even if IsEnforceReferentialIntegrityOnWrite was enabled. This has been fixed, and invalid references will now throw an error.
A bug prevented valueset expansion with includeHierarchy=true when lucene/elasticsearch was not enabled. This has been corrected. Thanks to GitHub user @ivagulin for the contribution!
The HashMapResourceProvider failed to display history results if the history included deleted resources. This has been corrected.
Previously, when a ConceptMap was created or updated using a transaction Bundle, the transaction failed with error code HAPI-0550: could not execute statement. This has been fixed.
There was inconsistent validation in the case of an incorrect communications language coding, due to validation that looked only for the first correct Coding: a single incorrect code produced one validation result, while one correct code and one incorrect code produced an entirely different result. This has been fixed by introducing a new boolean flag in CachingValidationSupport, along with a new constructor to leverage it.
The FhirTerser#clone method failed to clone resources when contained resources were present in the source resource. This has been fixed.
Enabling mass ingestion mode altered the resource deletion process, leaving resources partially deleted. This problem has been fixed.
Previously, a reindex batch job would fail when executing on deleted resources. This issue has been fixed.
Cross-partition subscription PUT requests with a custom interceptor failed validation because the read partition ID was used. This has been fixed by skipping validation when the validator is invoked during an update operation.
Previously, some MDM links of type POSSIBLE_MATCH were saved with unnormalized score values. This has been fixed.
Moved Batch2 reduction step logic to the messaging queue. Previously, it was executed directly during the maintenance run, which resulted in bugs with multiple reduction steps kicking off for long-running jobs.
Scheduling of bulk export jobs and binaries was not working with relational databases. This has now been fixed with a reimplementation on Batch2.
Deleting CodeSystem resources by URL then expunging would fail to expunge and a foreign key error would be thrown. This has been fixed.
Previously, bulk export jobs were getting stuck in the FINALIZE state when performed with many resources and a low Bulk Export File Maximum Capacity. This has been fixed.
A new command has been added to the HAPI-FHIR CLI called clear-migration-lock. This can be used to fix a database state which can occur if a migration is interrupted before completing.
Previously, the meta source field in subscription messages was inconsistently populated across different request types. This has been fixed, and the meta source is now included in all subscription messages.
Fixed an issue where a long-running reduction step caused its message not to be processed quickly enough, thereby allowing multiple reduction step jobs to start.
Three database columns have been changed from type TEXT to type OID when running on Postgres: BT2_JOB_INSTANCE.PARAMS_JSON_LOB, BT2_JOB_INSTANCE.REPORT, and BT2_WORK_CHUNK.CHUNK_DATA. This prevents VACUUM from erasing binary objects that are still in use.
Previously, if a message with a null header was sent to a Channel Import module and failed, a NullPointerException would occur and the consumer would become unable to receive any further messages. This has now been fixed.
Released: 2022-11-25
Codename: (Vishwa)
This version fixes a bug with 6.2.0 and previous releases wherein batch jobs that created very large chunk counts could occasionally fail to submit a small proportion of chunks.
The NPM package installer did not support installing on R4B repositories. Thanks to Jens Kristian Villadsen for the pull request!
Released: 2022-11-17
Codename: (Vishwa)
This version fixes a bug with 6.2.0 and previous releases wherein batch jobs that created very large chunk counts could occasionally fail to submit a small proportion of chunks.
Released: 2022-11-17
Codename: (Vishwa)
This release has breaking changes.
To recreate terminology freetext indexes, use the CLI command reindex-terminology.
A few dependencies have been bumped to the latest versions (dependent HAPI modules listed in brackets):
Added a new optional parameter to the upload-terminology operation of the HAPI-FHIR CLI. You can pass the -s or --size parameter to specify the maximum size that will be transmitted to the server before a local file reference is used. This parameter can be specified in human-readable format; for example, upload-terminology -s "1GB" will permit zip files up to 1 gigabyte, with anything larger defaulting to a local file reference.
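The human-readable size value accepted by -s/--size amounts to a simple suffix-multiplier parse. The sketch below is purely illustrative (it is not the HAPI CLI's actual parsing code) and only demonstrates the kind of value the parameter accepts:

```java
// Illustrative parser for human-readable sizes such as "1GB" or "512MB".
// NOT the HAPI CLI's actual implementation; shown only to clarify the
// format accepted by the -s / --size parameter.
public class SizeParser {
    public static long parseSize(String text) {
        String s = text.trim().toUpperCase();
        long multiplier = 1;
        if (s.endsWith("GB")) {
            multiplier = 1024L * 1024 * 1024;
            s = s.substring(0, s.length() - 2);
        } else if (s.endsWith("MB")) {
            multiplier = 1024L * 1024;
            s = s.substring(0, s.length() - 2);
        } else if (s.endsWith("KB")) {
            multiplier = 1024L;
            s = s.substring(0, s.length() - 2);
        }
        return Long.parseLong(s.trim()) * multiplier;
    }

    public static void main(String[] args) {
        System.out.println(parseSize("1GB")); // 1073741824
    }
}
```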
Previously, the DELETE request type was not supported for any operations. DELETE is now supported, and is enabled for the $export-poll-status operation to allow cancellation of jobs.
Provided the ability to have the NPM package installer skip installing a package if it is already installed and matches the version requested. This can be controlled by the reloadExisting attribute in PackageInstallationSpec. It defaults to true, which is the existing behaviour. Thanks to Craig McClendon (@XcrigX) for the contribution!
Added support for AWS OpenSearch to Fulltext Search. If an AWS Region is configured, HAPI-FHIR will assume you intend to connect to an AWS-managed OpenSearch instance, and will use Amazon's DefaultAwsCredentialsProviderChain to authenticate against it. If both username and password are provided, HAPI-FHIR will attempt to use them as a static credentials provider.
When using ForcedOffsetSearchModeInterceptor, any synchronous searches initiated programmatically (i.e. through the internal java API, not the REST API) will not be modified. This prevents issues when a java call requests a synchronous search larger than the default offset search page size.
Previously, Patient Bulk Export only supported endpoint [fhir base]/Patient/$export, which exports all patients. Now, Patient Export can be done at the instance level, following this format: [fhir base]/Patient/[id]/$export, which will export only the records for one patient. Additionally, added support for the patient parameter in Patient Bulk Export, which is another way to get the records of only one patient.
Added support for the :text modifier on string search parameters. This corrects an issue when using Elastic/Lucene indexing enabled where prefix match was used instead.
The LOINC terminology upload process was enhanced to consider 24 additional properties which were defined in the loinc.csv file but not previously uploaded.
A new built-in server interceptor called the InteractionBlockingInterceptor has been added. This interceptor allows individual operations to be included/excluded from a RestfulServer's exported capabilities.
The OpenApi generator now allows additional CSS customization for the Swagger UI page, as well as the option to disable resource type pages.
Previously, Group Bulk Export did not support including resources referenced by the resources in the patient compartment. This is now supported.
The LOINC terminology upload process was enhanced to load MAP_TO properties defined in the MapTo.csv input file into TermConcept(s).
Added new attribute for the @Operation annotation to define the operation's canonical URL. This canonical URL value will populate the operation definition in the CapabilityStatement resource.
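A minimal sketch of the new annotation attribute, assuming it is named canonicalUrl as the entry describes (the operation name, URL, and provider class here are hypothetical):

```java
import ca.uhn.fhir.rest.annotation.Operation;
import ca.uhn.fhir.rest.api.server.RequestDetails;
import org.hl7.fhir.r4.model.Parameters;

public class ExampleOperationProvider {

   // The canonicalUrl value (new in this release) is used to populate the
   // operation definition advertised in the server's CapabilityStatement.
   // Attribute name is an assumption based on the changelog entry.
   @Operation(
         name = "$example-op",
         canonicalUrl = "http://example.org/OperationDefinition/example-op",
         idempotent = true)
   public Parameters exampleOp(RequestDetails theRequestDetails) {
      return new Parameters();
   }
}
```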
Previously, the number of resources per binary file in bulk export was a static 1000. This is now configurable by a new DaoConfig property called 'setBulkExportFileMaximumCapacity()', and the default value is 1000 resources per file.
A new interceptor pointcut STORAGE_TRANSACTION_PROCESSING has been added. Hooks for this pointcut can examine and modify FHIR transaction bundles being processed by the JPA server before processing starts.
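A hook for the new pointcut might look roughly like the following sketch; the exact hook parameter list is an assumption based on the changelog entry, and the interceptor class name is hypothetical:

```java
import ca.uhn.fhir.interceptor.api.Hook;
import ca.uhn.fhir.interceptor.api.Interceptor;
import ca.uhn.fhir.interceptor.api.Pointcut;
import org.hl7.fhir.instance.model.api.IBaseBundle;

@Interceptor
public class TransactionInspectingInterceptor {

   // Invoked before the JPA server begins processing a FHIR transaction
   // bundle; the hook may examine or modify the bundle before any entries
   // are processed.
   @Hook(Pointcut.STORAGE_TRANSACTION_PROCESSING)
   public void beforeTransaction(IBaseBundle theBundle) {
      // e.g. audit, validate, or rewrite transaction entries here
   }
}
```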
For SNOMED CT, upload-terminology now supports both the Canadian and International editions' file names for the SCT Description File.
Support has been added for FHIR R4B (4.3.0). See the R4B Documentation for more information on what this means.
By default, if the $export operation receives a request that is identical to one that has been recently processed, it will attempt to reuse the batch job from the former request. A new configuration parameter has been introduced that disables this behavior and forces a new batch job on every call.
$mdm-submit can now be run as a batch job, which will return a job ID, and can be polled for status. This can be accomplished by sending a Prefer: respond-async header with the request.
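Using the generic client, the asynchronous invocation described above can be sketched as follows (the server base URL is hypothetical, and invoking $mdm-submit with no parameters is shown only for illustration):

```java
import ca.uhn.fhir.context.FhirContext;
import ca.uhn.fhir.rest.client.api.IGenericClient;
import org.hl7.fhir.r4.model.Parameters;

public class MdmSubmitAsyncExample {
   public static void main(String[] args) {
      FhirContext ctx = FhirContext.forR4();
      // Hypothetical server base URL
      IGenericClient client = ctx.newRestfulGenericClient("http://localhost:8080/fhir");

      // Sending "Prefer: respond-async" asks the server to run $mdm-submit
      // as a batch job; the response carries a job ID that can be polled.
      Parameters outcome = client
            .operation()
            .onServer()
            .named("$mdm-submit")
            .withNoParameters(Parameters.class)
            .withAdditionalHeader("Prefer", "respond-async")
            .execute();

      System.out.println(ctx.newJsonParser().encodeResourceToString(outcome));
   }
}
```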
A remove method has been added to the Batch2 job registry. This will allow for dynamic job registration in the future.
Added new System Property called 'CLEAR_LOCK_TABLE_WITH_DESCRIPTION' that when set to the uuid of a lock record, will clear that lock record before attempting to insert a new one.
When using SearchNarrowingInterceptor, FHIR batch operations with a large number of conditional create/update entries exhibited very slow performance due to an unnecessary nested loop. This has been corrected.
Processing for _include and _revinclude parameters in the JPA server has been streamlined, which should improve performance on systems where includes are heavily used.
All Spring Batch dependencies and services have been removed. Async processing has fully migrated to Batch 2.
The Partition Interceptor has been refactored out of the JPA module in order to facilitate future use in other modules. No functional changes have been made.
Removed Flyway database migration engine. The migration table still tracks successful and failed migrations to determine which migrations need to be run at startup. Database migrations no longer need to run differently when using an older database version.
Changed Minimum Size (bytes) in FHIR Binary Storage of the persistence module from an integer to a long.
Previously, cascading deletes did not work correctly if multiple threads initiated a delete at the same time: either the resource would not be found, or there would be a collision on inserting the new version. This change fixes the problem by better handling these conditions, either ignoring an already-deleted resource or retrying in a new inner transaction.
With Elasticsearch configured, including terminology, an exception was raised while expanding a ValueSet with more than 10,000 concepts. This has now been fixed.
Initial page loading has been optimized to reduce the number of prefetched resources. This should improve the speed of initial search queries in many cases.
Previously, Celsius and Fahrenheit temperature quantities were not normalized. This is now fixed. This change requires reindexing of resources containing Celsius or Fahrenheit temperature quantities.
With Elastic/Lucene indexing enabled, searches for numbers or quantities were not always using the specified ranges, because the number of significant figures was not properly calculated. This is now fixed.
When serializing references where the reference target has been instantiated using a resource object and not a reference string, the serialization was omitted when the reference appeared in the resource metadata section. This has been corrected.
A meaningful response message is now provided when deleting non-existent or already-deleted resources.
Previously, unexpected response status codes were not handled by the import-poll-status operation. This has been fixed, and an error message is now thrown.
During bulk import, fetching resource files from a client server would fail if the server response specified a content-type that was not application/fhir+json. Validation has been loosened to accept content-type values of text/plain and application/json.
Fixed a bug where searching with a target resource parameter (e.g. Coverage:payor:Patient) as the value of an _include parameter would fail with a 500 response.
There are now two different methods of missing-fields search: one that works when Enable Missing Fields Search is enabled, and one that works when it is not. The two are not compatible with each other.
Previously, if the Endpoint Base URL was set to something different from the default value, the URL returned by export-poll-status was incorrect. After correcting the export-poll-status URL, the returned binary file URL was also incorrect. This error has been fixed, and the URLs returned from $export and $export-poll-status no longer contain the extra path from 'Fixed Value for Endpoint Base URL'.
Previously, when a client provided a requestId within the source URI of Meta.source, the provided requestId was discarded and replaced by an ID generated by the system. This has been corrected, and the behaviour is now configurable.
Previously, using the import-csv-to-conceptmap command in the CLI successfully created ConceptMap resources without a ConceptMap.status element, which is against the FHIR specification. This has been fixed by adding a required option for status for the command.
Fast-tracking of batch jobs that produce only one chunk has been rewritten to use Quartz triggerJob. This ensures that at most one thread is updating job status at a time. Also, jobs that had FAILED, ERRORED, or been CANCELLED could accidentally be set back to IN_PROGRESS; this has been corrected.
Previously, if a FullTextSearchSvcImpl was defined, but was disabled via configuration, there could be data loss when reindexing due to transaction rollbacks. This has been corrected. Thanks to @dyoung-work for the fix!
Previously, the :nickname qualifier only worked with the predefined name and given SearchParameters. This has been fixed and now the :nickname qualifier can be used with any string SearchParameters.
A regression was introduced in 6.1.0 which caused bulk export jobs to not default to the correct output format when the _outputFormat parameter was omitted. This behaviour has been fixed, and if omitted, will now default to the only legal value application/fhir+ndjson.
Previously, bulk export for the Group type with _typeFilter did not apply the filter if it targeted patients, and returned all members of the group. This has now been fixed, and the filter is applied.
Previously when ValueSets were pre-expanded after loinc terminology upload, expansion was failing with an exception for each ValueSet with more than 10,000 properties. This problem has been fixed. This fix changed some freetext mappings (definitions about how resources are freetext indexed) for terminology resources, which requires reindexing those resources. To do this use the reindex-terminology command.
Previously, when executing a '[base]/_history' search, '_since' and '_at' shared the same behaviour: when a user searched with '_at' for a date matching a record's updated date, that record was not returned. This has been corrected. The '_since' query parameter works as it previously did, and the '_at' query parameter now returns the record at the '_at' time.
In HAPI-FHIR 6.1.0, a regression was introduced into bulk export causing exports beyond the first one to fail in strange ways. This has been corrected.
Previously, creating a DSTU3 SearchParameter with an expression that does not start with a resource type would throw an error. This has been corrected.
When running HAPI FHIR inside intellij with runtime Nonnull assertions enabled, a sequencing issue in OperationMethodBinding could cause a null pointer exception. This has been corrected.
Fixed a bug in Group Bulk Export: if a group member was part of multiple groups, other groups were incorrectly included in the export when the Group resource type was specified. Now, when exporting a specific group and electing to return Group resources, only the requested Group is returned, regardless of cross-membership.
Fixed a bug in Group Bulk Export where the server would crash on Oracle due to too many clauses.
Fixed a Group Bulk Export bug which was causing it to fail to return resources due to an incorrect search.
Previously, when the upload-terminology command was used to upload a terminology file with endpoint validation enabled, a validation error occurred due to a missing file content type. This has been fixed by specifying the file content type of the uploaded file.
Searching for strings with the :text qualifier was not performing an advanced search. This has been corrected.
There was a bug in content-type negotiation when reading Binary resources. Previously, when a client requested a Binary resource and with an Accept header that matched the contentType of the stored resource, the server would return an XML representation of the Binary resource. This has been fixed, and a request with a matching Accept header will receive the stored binary data directly as the requested content type.
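For example (resource id hypothetical), a request whose Accept header matches the stored contentType now receives the raw binary payload rather than an XML Binary resource:

```http
GET [base]/Binary/123
Accept: image/png
```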
Fixed the $poll-export-status endpoint so that when a job is complete, this endpoint now correctly includes the request and requiresAccessToken attributes.
When validation was enabled on a fhir_endpoint module, validating a bundle containing a Measure resource that references a Library resource, or a MeasureReport resource that references a Measure, would fail with an IllegalArgumentException complaining about the Library or Measure resource, respectively. The fix ensures that Library and Measure resources are handled correctly, so that validation can proceed and report all successes and errors.
Fixed a bug where the $everything operation on Patient instances and the Patient type was not correctly propagating the transactional semantics. This was causing callers to not be in a transactional context.
Modified BinaryAccessProvider to use a safer method of checking the contents of an input stream. Thanks to @ttntrifork for the fix!
A bug prevented the DSTU2 JPA server Conformance.implementation.description field from being populated in some cases. This has been corrected.
Previously, the error segment of the $poll-export-status operation was missing. This has now been added.
Fixed a bug where filtering by range on a numeric extension stopped working after an update: the HFJ_SPIDX_NUMBER record was deleted after a PUT update to the SearchParameter value for the resource.
A previous fix resulted in Bulk Export files containing mixed resource types, which is not allowed in the bulk data access IG. This has been corrected.
A previous fix resulted in Bulk Export files containing duplicate resources, which is not allowed in the bulk data access IG. This has been corrected.
MDM messages were using the resource id as a message key when it should be using the EID as a partition hash key. This could lead to duplicate golden resources on systems using Kafka as a message broker.
Previously, enabling the 'indexMissingFields' and 'advancedHSearchIndexing' features caused creating a Patient resource to return HTTP 500. Upon further investigation, the same happened when creating an Observation. This has been fixed.
In the JPA server, when deleting a resource the associated tags were previously deleted even though the FHIR specification states that tags are independent of FHIR versioning. After this fix, resource tags and security labels will remain when a resource is deleted. They can be fetched using the $meta operation against the deleted resource, and will remain if the resource is brought back in a subsequent update.
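The retained tags of a deleted resource can be fetched with the standard $meta operation (resource id hypothetical); the response is a Parameters resource containing the Meta element:

```http
GET [base]/Patient/123/$meta
```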
In the JPA server, when a resource is being updated, the response will now include any tags or security labels which were not present in the request but were carried forward from the previous version of the resource.
A transaction scoping bug in Batch2 has been resolved, preventing a crash on Postgres. We have also standardized on the Spring @Transactional annotation, and removed use of the equivalent javax.transaction annotation.
Two issues with reverse chaining (i.e. the _has search parameter) have been addressed: * Searching for a reverse chain with a target search parameter of _id did not work correctly, e.g. Patient?_has:Observation:subject:_id=Patient/123 * Searching with a combination of a forward chain and a reverse chain did not work correctly if indexing contained resources was enabled, e.g. Observation?subject._has:Group:member:_id=Group/123
Upon a subscription delivery failure, the failing payload was previously written to the logs, which could be considered PHI. Resource content is no longer written to logs on subscription failure.
Previously, when a phonetic search parameter was updated, existing resources did not have their indexed search parameter strings updated upon reindex if the new normalized string started with the same letter as under the old algorithm (e.g. JN vs. JAN), and searching on the new normalized string failed to return results. This has been corrected.
Loading us-core IG was raising UnprocessableEntityException: HAPI-2131: Can't process submitted SearchParameter as it is overlapping an existing one. This problem has been fixed.
When storing a blob, the database blob binary storage service could record the blob size as much smaller than the actual blob size. This issue has been fixed.
When triggering a batch export, the number of resources processed was returned as zero. This has been fixed to return the total number of resources processed across one or more bundles.
Previously, performing the $validate operation on a resource update and sending the resource as a parameter in the request body resulted in an error stating that the resource was missing an id. This has been fixed by allowing the resource to be processed correctly when sent as a parameter.
Support has been added for clustered database upgrades in the new (non-FlyWay) migrator codebase. Only one database migration is permitted at a time.
When DaoConfig UpdateWithHistoryRewriteEnabled is enabled, the package loader throws a NullPointerException. This has been corrected.
Previously, when using _offset, queries could result in short pages and repeat results across different pages. This has now been fixed.
Bulk Group export was failing to export Patient resources when the Client ID mode was set to ANY. This has been fixed.
Previously, as a performance optimization, if the total number of resources was less than the _getpagesoffset value, the results would default to the last resource offset. This was especially evident with consecutive requests, where some requests displayed only one entry. This issue is now fixed.
Generating a Bundle with resources was setting entry.request.url as an absolute URL when it should be relative. This has been fixed.
Performing a bulk export via a GET request with an _outputFormat value containing a '+' (e.g. 'application/fhir+ndjson') would result in an HTTP 400 because the '+' was replaced with a space. After this fix the '+' is preserved in the parameter value.
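Clients can also sidestep this class of problem by percent-encoding the value in the query string, which decodes identically before and after this fix:

```http
GET [base]/$export?_outputFormat=application%2Ffhir%2Bndjson
```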
Previously during Bulk Export, if no _type parameter was provided, an error would be thrown. This has been changed, and if the _type parameter is omitted, Bulk Export will default to all registered types.
Fixed a bug which caused a failure when combining a Consent Interceptor with version conversion via the Accept header.
Previously, when creating a DocumentReference with an Attachment containing a URL over 254 characters an error was thrown. This has been corrected and now an Attachment URL can be up to 500 characters.
Previously, Bulk Export jobs were always reused, even if completed. Now, jobs are only reused if an identical job is already running, and has not yet completed or failed.
Previously, SearchParameters with identical codes and bases could be created. This has been corrected. If a SearchParameter is submitted which is a duplicate, it will be rejected.
Previously, if the $reindex operation failed with a ResourceVersionConflictException the related batch job would fail. This has been corrected by adding 10 retry attempts for transactions that have failed with a ResourceVersionConflictException during the $reindex operation. In addition, the ResourceIdListStep was submitting one more resource than expected (i.e. 1001 records processed during a $reindex operation if only 1000 Resources were in the database). This has been corrected.
Database migration steps were failing with Oracle 19C. This has been fixed by allowing the database engine to skip dropping non-existent indexes.
The ActionRequestDetails class has been dropped (it has been deprecated since HAPI FHIR 4.0.0). This class was used as a parameter to the SERVER_INCOMING_REQUEST_PRE_HANDLED interceptor pointcut, but can be replaced in any existing client code with RequestDetails. This change also removes an undocumented behaviour where the JPA server internally invoked the SERVER_INCOMING_REQUEST_PRE_HANDLED a second time from within various processing methods. This behaviour caused performance problems for some interceptors (e.g. SearchNarrowingInterceptor) and no longer offers any benefit so it is being removed.
The interceptor system has now deprecated the concept of ThreadLocal interceptors. This feature was added for an anticipated use case, but has never seen any real use that we are aware of and removing it should provide a minor performance improvement to the interceptor registry.
Released: 2022年10月06日
Codename: (Unicorn)
MDM messages were using the resource id as a message key when it should be using the EID as a partition hash key. This could lead to duplicate golden resources on systems using Kafka as a message broker.
Released: 2022年09月12日
Codename: (Unicorn)
Added support for AWS OpenSearch to Fulltext Search. If an AWS Region is configured, HAPI-FHIR will assume you intend to connect to an AWS-managed OpenSearch instance, and will use Amazon's DefaultAwsCredentialsProviderChain to authenticate against it. If both username and password are provided, HAPI-FHIR will attempt to use them as a static credentials provider.
In HAPI-FHIR 6.1.0, a regression was introduced into bulk export causing exports beyond the first one to fail in strange ways. This has been corrected.
Fixed a bug in Group Bulk Export: If a group member was part of multiple groups , it was causing other groups to be included during Group Bulk Export, if the Group resource type was specified. Now, when doing an export on a specific group, and you elect to return Group resources, only the called Group will be returned, regardless of cross-membership.
Fixed a bug in Group Bulk Export where the server would crash in oracle due to too many clauses.
Fixed a Group Bulk Export bug which was causing it to fail to return resources due to an incorrect search.
Released: 2022年09月02日
Codename: (Unicorn)
A regression was introduced in 6.1.0 which caused bulk export jobs to not default to the correct output format when the _outputFormat parameter was omitted. This behaviour has been fixed, and if omitted, will now default to the only legal value application/fhir+ndjson.
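For reference, a minimal kick-off request per the Bulk Data Access IG (base URL hypothetical); with _outputFormat omitted, the export now defaults to application/fhir+ndjson:

```http
GET [base]/$export
Accept: application/fhir+json
Prefer: respond-async
```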
Previously, bulk export for Group type with _typeFilter did not apply the filter if it was for the patients, and returned all members of the group. This has now been fixed, and the filter will apply.
Released: 2022年08月18日
Codename: (Unicorn)
When upgrading to this release, there are a few important notes to be aware of:
This release removes the Legacy Search Builder. If you upgrade to this release, the new Search Builder will automatically be used.
This release will break existing implementations which use the subscription delete feature. The place where the extension needs to be installed on the Subscription resource has now changed. While it used to be on the top-level Subscription resource, the extension should now be added to the Subscription's channel element.
Here is how the subscription should have looked before:
{
"resourceType": "Subscription",
"id": "1",
"status": "active",
"reason": "Monitor resource persistence events",
"criteria": "Patient",
"channel": {
"type": "rest-hook",
"payload": "application/json"
},
"extension": [
{
"url": "http://hapifhir.io/fhir/StructureDefinition/subscription-send-delete-messages",
"valueBoolean": true
}
]
}
And here is how it should now look:
{
"resourceType": "Subscription",
"id": "1",
"status": "active",
"reason": "Monitor resource persistence events",
"criteria": "Patient",
"channel": {
"extension": [
{
"url": "http://hapifhir.io/fhir/StructureDefinition/subscription-send-delete-messages",
"valueBoolean": true
}
],
"type": "rest-hook",
"payload": "application/json"
}
}
The version of a few dependencies have been bumped to the latest versions (dependent HAPI modules listed in brackets):
Refactored mdm classes to use ResourcePersistenceId rather than prematurely converting to longs
Previously, the http://hapifhir.io/fhir/StructureDefinition/subscription-send-delete-messages extension on REST-HOOK subscription channel element was only valid for R4. This has been expanded to support DSTU3 and DSTU2.
The SearchParameterMap query normalizer did not support _include and _revinclude parameters with the :recurse qualifier. This has been corrected. Thanks to Augustas Vilčinskas for the PR!
Experimental support for CockroachDB has been added to the JPA server. Thanks to Joe Shook for the contribution!
The ValueSet/$expand operation can now optionally support the displayLanguage parameter. Thanks to Gjergj Sheldija for the pull request!
Add support for the :in and :not-in qualifiers for use in SMART on FHIR v2 granular scope definition.
If a gated Batch job step and all prior steps produced only a single chunk of work, notify the next step immediately instead of waiting for the scheduled job maintenance to advance it.
Extended Lucene/Elasticsearch indexing of search parameters now supports _tag, _profile and _security.
Currently, neither HAPI-FHIR nor the FHIR specification supports modifying historical versions of a resource. This implementation adds an Update with History Rewrite feature to HAPI-FHIR. It can be accessed via the IGenericClient's updateHistoryRewrite method.
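A sketch of the underlying HTTP interaction, assuming the X-Rewrite-History header mentioned elsewhere in this changelog and a hypothetical resource and version:

```http
PUT [base]/Patient/123/_history/2
X-Rewrite-History: true
Content-Type: application/fhir+json
```

The request body carries the replacement content for version 2 of the resource.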
The rule system has been extended to support optional fhir-expressions that narrow the application of a rule. A matching RuleFilteringConsentService implements post-query support for this filtering using an in-memory matcher.
Added support to Bulk Export for FHIR Response Terminology Translation. It will use the same mappings as the Translation Interceptor.
Added some test cases for CQL measures on immunization, as well as some testing methods to support easier modification of CQL in unit tests.
If provided, the practitioner parameter for $evaluate-measure is now also passed on to measures of type population (not only to those of type subject-list).
Added support for loading the International version of ICD-10. Thanks to kaicode for the contribution!
Batch2 job definitions can now optionally provide an error handler callback that will be called when a job instance fails, errors or is cancelled.
JPA server patch operation has been moved to the hapi-fhir-storage project, for reuse in other backends.
Previously, the HAPI FHIR CLI commands that made HTTP requests could only be used for HTTP endpoints. This feature adds support for HTTPS endpoints by using TLS authentication for ExampleDataUploader, ExportConceptMapToCsvCommand, ImportCsvToConceptMapCommand, ReindexTerminologyCommand and UploadTerminologyCommand.
The InMemoryMatcher used by Subscription matching now supports the token :not modifier. The :not-in modifier was also corrected for cases of multiple values in a comma-separated or-list.
Added pointcut MDM_BEFORE_PERSISTED_RESOURCE_CHECKED, allowing customization of the source resource before MDM processing.
The 'SUBSCRIPTION_BEFORE_MESSAGE_DELIVERY' pointcut now supports a new parameter, ResourceModifiedJsonMessage. This permits interceptor implementers to modify the outgoing envelope before it is sent off.
Introduced a specialized exception encapsulating the mechanism for reporting TokenParam validation failures when searching resources on the _tag and _security parameters. The new exception was introduced primarily for reusability.
Chained searches have been sped up for configurations where the Index Contained Resources feature is enabled.
When the Mark Resources for Reindexing after SearchParameter change configuration parameter is enabled, SMILE will use the Batch 2 framework to perform the reindexing operation.
The Legacy Search Builder has been removed in favour of the new Search Builder. The DaoConfig will retain its setter for deprecation purposes, but that will also be removed after the next release.
Fixed an issue where FindCandidateByLinkSvc still used long to store persistent ids rather than IResourcePersistentId; also fixed an issue where MdmLinkCreateSvc was not setting the resource type of MdmLinks properly.
Previously, R5 Appointment resources would fail to save in JPA servers due to a path splitting problem during Search Parameter extraction. This has been corrected.
Previously, PATCH operations that were contained in a transaction bundle would not return response entries. This has been corrected.
Previously, the $expunge operation at the system level was always available, regardless of configuration parameter settings. Now, the system-level $expunge operation requires that the Expunge Enabled parameter is enabled. Additionally, the expungeEverything option of the operation requires that the Allow Multiple Delete Enabled parameter is enabled.
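For reference, a system-level invocation with the expungeEverything option (parameter name as used by HAPI FHIR's $expunge operation) now requires both configuration parameters above to be enabled:

```http
POST [base]/$expunge
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [ {
    "name": "expungeEverything",
    "valueBoolean": true
  } ]
}
```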
Nickname matching only matched formal names to a nickname. This has been broadened to match nicknames to other nicknames for the same formal name.
Previously, the RuleBuilder's rules surrounding Group Bulk Export would return failures too early in the case of multiple permissions. This has been corrected, and the rule will no longer prematurely return a DENY verdict, instead opting to delegate to future rules.
Removed a strict dependency on a FullTextSVC. This was causing HAPI to fail to boot when Lucene/ElasticSearch was not enabled.
MS SQL Standard Edition does not support the ONLINE=ON feature for index creation, and failed during upgrades to 6.0. Upgrades now detect these limitations and avoid this feature when unsupported.
Fixed a bug where a FHIR patch operation in a transaction bundle returned HTTP 500 when the entry's request.url property contained a resource query.
Previously, subscriptions in a partition with a null id would be matched against incoming resources from all partitions. Now, subscriptions only match against incoming resources in the partition in which the subscription exists, unless cross-partition subscriptions are enabled and the subscription has the appropriate extension.
A change in H2's new version caused resources >1mb in size to fail to correctly store. This has been corrected. Thanks to Patrick Werner for the report and pull request!
Moved the http://hapifhir.io/fhir/StructureDefinition/subscription-send-delete-messages from the Subscription to the Subscription's Channel element.
HAPI now populates issue.detail with the messageID from the core validator, and org.hl7.fhir.core was updated to the latest version (5.6.48). Thanks to Patrick Werner for the feature request and pull request!
Previously, deleted resources with client generated ids were being included in the bundle total when searching by _id. This has been corrected by adding functionality to optionally filter out deleted resources when resolving forced ids to persistent ids.
Previously, Group Bulk Export would expand into other groups, due to the nature of Group.member being part of the patient search compartment. This has been fixed, and now, Group Bulk Export will only ever export the Group resource specified in the request, regardless of patient membership.
Previously, using the http://hapifhir.io/fhir/StructureDefinition/subscription-send-delete-messages extension on a subscription, along with a subscription that used a Search Expression criteria (e.g. Patient?gender=male) would cause a failure during an attempt to send the delete. This has been corrected.
Codes with the seventh character are now dynamically generated while uploading the ICD-10-CM diagnosis code file, and the corresponding extended description can be extracted using the code.
Changed the Unknown resource type error message to include only resource provider classes and exclude plain provider classes.
The server was blocking updates with a 403 Forbidden response when the URL included a resource version but the X-Rewrite-History header was not present. This has been corrected.
Fixed the $expunge operation so that the expungeEverything response returns the correct number of dropped resources.
Previously, the DSTU3 Conformance provider was not including searchInclude or searchRevInclude results in the conformance response. This has been corrected.
Fixed a bug where an NPE occurred while updating references when posting a bundle containing a duplicate resource in a conditional create and another conditional verb (delete).
Fixed a regression in 6.0.0 which caused the SearchNarrowingInterceptor to occasionally be applied to non-search operations.
It was possible under certain race conditions for the batch2 completion handler to be called twice. This has been corrected.
Batch2 now has standard error handling that fails the job if processing of a chunk fails more than 3 times. Additionally, better validation was added to the reindex job to disallow bad URLs.
A batch2 state change regression was introduced recently that resulted in batch2 jobs not being properly completed. This has been corrected.
FHIR queries using date sort were ignoring seconds and milliseconds when using lucene/elasticsearch SearchParameter indexing. This has been corrected.
Previously, if a FHIR patch was performed on a soft deleted resource, the patch was successfully applied and the resource was undeleted and populated with only the data contained in the patch. This has been fixed so that patch on deleted resources now issues a HTTP 410 response.
Previously, the FHIRPath PATCH operation type add was replacing the entire element as opposed to adding the new element to the existing element. This has been corrected.
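As an example of the corrected add semantics (per the FHIRPath Patch format in the FHIR specification; resource id and values hypothetical), this body appends a new given name to the existing name element instead of replacing it:

```http
PATCH [base]/Patient/123
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [ {
    "name": "operation",
    "part": [
      { "name": "type", "valueCode": "add" },
      { "name": "path", "valueString": "Patient.name[0]" },
      { "name": "name", "valueString": "given" },
      { "name": "value", "valueString": "Jim" }
    ]
  } ]
}
```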
Previously, for Lucene/Elasticsearch, scrolled results were used even when performing synchronous searches. This has now been fixed.
When starting hapi-fhir from the starter application, a BeanDefinitionOverrideException was thrown with the message: Invalid bean definition with name loadIdsStep. This issue is now fixed.
Elastic/Lucene documentation was updated to indicate that full reindex is required when enabling storing resource bodies in indexes when indexed resources exist.
Migrated the existing Term Code System Delete and Term Code System Delete Version jobs to the Batch2 framework.
Previously when a Subscription was created using the package installer and with partitioning enabled, there was an error reading the partition name inside the SubscriptionRegisteringSubscriber. This fix checks incoming Subscription requests for a RequestPartitionId with a list of partition names containing null values and uses RequestPartitionId#defaultPartition() to obtain the default partition instead.
Previously, a GET operation with an _include parameter, performed as a bundle entry, could occasionally miss returning some results. This has been corrected, and queries like '/Medication?_include=Medication:ingredient' now correctly include the relevant target resources.
Previously, the command prompt was not returned after initiating a bulk import operation, even once the import had completed. This has been fixed, and the command prompt now returns after the upload process finishes.
Previously, a Lucene/Elasticsearch-enabled search with offset=0 and a count greater than 50 returned only 50 resources. This has now been fixed.
Previously, a Lucene/Elasticsearch-enabled sorted search with an offset greater than zero did not sort results. This has now been fixed.
Previously, when modifying one of the common search parameters, if the Mark Resources For Reindexing Upon Search Parameter Change configuration parameter was enabled, an invalid reindex batch job request would be created. This has been fixed, and the batch job request will contain no URL constraints, causing all resources to be reindexed.
Previously, if an intermediate step of a gated batch job produced no output data, an exception would be thrown and processing of the step would abort, leaving the batch job in an unrecoverable state. This has been fixed.
Previously, if an error was thrown after the outgoing response stream had started being written to, an OperationOutcome was not being returned. Instead, an HTML servlet error was being thrown. This has been corrected.
Prevented the creation of overlapping SearchParameter resources, which would lead to random search behaviour.
Fixed a bug with subscriptions that support sending deletes. Previously, a previously-registered subscription which did not support deletes would short-circuit processing and cause subsequent subscriptions not to fire. This has been corrected.
Searches using code:in and code:not-in will now expand the valueset up to the Maximum Expansion Size defined in the DaoConfig.
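For example (ValueSet URL hypothetical), a token search using the :in qualifier, whose ValueSet is now expanded up to the configured maximum:

```http
GET [base]/Observation?code:in=http://example.org/ValueSet/cardiac-codes
```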
Fixed bug where creating a new resource in partitioned mode using a PUT operation invoked pointcut STORAGE_PARTITION_IDENTIFY_READ during validation. This caused errors because read interceptors (listing on Pointcut.STORAGE_PARTITION_IDENTIFY_READ) and write interceptors (listening on Pointcut.STORAGE_PARTITION_IDENTIFY_CREATE) could return different partitions (say, all vs default). Now, only the CREATE pointcut will be invoked, and the same partition will be used for any reads during UPDATE.
Previously, if Fulltext Search was enabled, and a $delete-expunge job was run, it could leave orphaned documents in the Fulltext index. This has been corrected.
Fixed a bug where repeated calls to the same Bulk Export request would not return the cached result, but would instead start a new Bulk Export job.
When multiple permissions exist for a user, granting access to the same compartment for different owners, the permissions will be collapsed into one rule. Previously, if these permissions had filters, only the filter of the first permission in the list would be applied, but it would apply to all of the owners. This has been fixed by turning off the collapse function for permissions with filters, converting each one into a separate rule.
When offset searches are enforced on the server, not all of the search parameters were loaded, which caused an NPE. Now an ERROR is logged instead.
Fixed an issue with $delete-expunge that resulted in SQL errors on large amounts of deleted data
Released: 2022年08月29日
Codename: (Tanuki)
Fixed a bug where searching with a target resource parameter (e.g. Coverage:payor:Patient) as the value of an _include parameter would fail with a 500 response.
Released: 2022年07月22日
Codename: (Tanuki)
Previously, the http://hapifhir.io/fhir/StructureDefinition/subscription-send-delete-messages extension on REST-HOOK subscription channel element was only valid for R4. This has been expanded to support DSTU3 and DSTU2.
Released: 2022年07月18日
Codename: (Tanuki)
Previously, using the http://hapifhir.io/fhir/StructureDefinition/subscription-send-delete-messages extension on a subscription, along with a subscription that used a Search Expression criteria (e.g. Patient?gender=male) would cause a failure during an attempt to send the delete. This has been corrected.
Released: 2022年06月14日
Codename: (Tanuki)
Released: 2022年05月25日
Codename: (Tanuki)
Previously, the RuleBuilder's rules surrounding Group Bulk Export would return failures too early in the case of multiple permissions. This has been corrected, and the rule will no longer prematurely return a DENY verdict, instead opting to delegate to future rules.
Removed a strict dependency on a FullTextSVC. This was causing HAPI to fail to boot when Lucene/ElasticSearch was not enabled.
Released: 2022年05月18日
Codename: (Tanuki)
This release has breaking changes, and some large database changes.
Some freetext mappings (definitions about how resources are freetext indexed) were changed for Terminology resources. This change requires a full reindexing for any Smile CDR installations which make use of the following features:
To reindex all resources call:
POST /$reindex
Content-Type: application/fhir+json
{
"resourceType": "Parameters",
"parameter": [ {
"name": "everything",
"valueBoolean": true
}, {
"name": "batchSize",
"valueDecimal": 1000
} ]
}
The JPA SearchParameter database indexing has been redesigned in 6.0. As a result, performing the upgrade may take longer than usual. Database migrations happen automatically after server upgrade during the next restart, and the server is unavailable during this migration window. To avoid this prolonged outage during server restart, you can apply these index changes manually before upgrading the server. The server is compatible with both the old and new indexes. The syntax for index definition varies, and we include separate scripts for MS Sql, Postgres, and Oracle.
This script updates the indexing, and assumes Enterprise Edition.
If you are running Standard Edition, edit the script and remove the WITH (ONLINE = ON) clauses.
Note: Without this feature, the database tables will be locked during the creation of each index, and you will be unable to save, update, or delete any resource until the statement completes.
-- Table: HFJ_SPIDX_DATE
create index IDX_SP_DATE_HASH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_LOW, SP_VALUE_HIGH, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_DATE.IDX_SP_DATE_HASH;
drop index HFJ_SPIDX_DATE.IDX_SP_DATE_HASH_LOW;
create index IDX_SP_DATE_HASH_HIGH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_HIGH, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_DATE.IDX_SP_DATE_HASH_HIGH;
create index IDX_SP_DATE_ORD_HASH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_LOW_DATE_ORDINAL, SP_VALUE_HIGH_DATE_ORDINAL, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_DATE.IDX_SP_DATE_ORD_HASH;
create index IDX_SP_DATE_ORD_HASH_HIGH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_HIGH_DATE_ORDINAL, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_DATE.IDX_SP_DATE_ORD_HASH_LOW;
create index IDX_SP_DATE_RESID_V2 on HFJ_SPIDX_DATE(RES_ID, HASH_IDENTITY, SP_VALUE_LOW, SP_VALUE_HIGH, SP_VALUE_LOW_DATE_ORDINAL, SP_VALUE_HIGH_DATE_ORDINAL, PARTITION_ID) WITH (ONLINE = ON);
alter table HFJ_SPIDX_DATE drop constraint FK17S70OA59RM9N61K9THJQRSQM;
drop index HFJ_SPIDX_DATE.IDX_SP_DATE_RESID;
ALTER TABLE HFJ_SPIDX_DATE DROP CONSTRAINT FK17s70oa59rm9n61k9thjqrsqm;
alter table HFJ_SPIDX_DATE add constraint FK_SP_DATE_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index HFJ_SPIDX_DATE.IDX_SP_DATE_UPDATED;
-- Table: HFJ_SPIDX_TOKEN
create index IDX_SP_TOKEN_HASH_V2 on HFJ_SPIDX_TOKEN(HASH_IDENTITY, SP_SYSTEM, SP_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_TOKEN.IDX_SP_TOKEN_HASH;
create index IDX_SP_TOKEN_HASH_S_V2 on HFJ_SPIDX_TOKEN(HASH_SYS, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_TOKEN.IDX_SP_TOKEN_HASH_S;
create index IDX_SP_TOKEN_HASH_SV_V2 on HFJ_SPIDX_TOKEN(HASH_SYS_AND_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_TOKEN.IDX_SP_TOKEN_HASH_SV;
create index IDX_SP_TOKEN_HASH_V_V2 on HFJ_SPIDX_TOKEN(HASH_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_TOKEN.IDX_SP_TOKEN_HASH_V;
drop index HFJ_SPIDX_TOKEN.IDX_SP_TOKEN_UPDATED;
create index IDX_SP_TOKEN_RESID_V2 on HFJ_SPIDX_TOKEN(RES_ID, HASH_SYS_AND_VALUE, HASH_VALUE, HASH_SYS, HASH_IDENTITY, PARTITION_ID) WITH (ONLINE = ON);
alter table HFJ_SPIDX_TOKEN drop constraint FK7ULX3J1GG3V7MAQREJGC7YBC4;
drop index HFJ_SPIDX_TOKEN.IDX_SP_TOKEN_RESID;
ALTER TABLE HFJ_SPIDX_TOKEN DROP CONSTRAINT FK7ulx3j1gg3v7maqrejgc7ybc4;
alter table HFJ_SPIDX_TOKEN add constraint FK_SP_TOKEN_RES foreign key (RES_ID) references HFJ_RESOURCE;
-- Table: HFJ_SPIDX_NUMBER
create index IDX_SP_NUMBER_HASH_VAL_V2 on HFJ_SPIDX_NUMBER(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_NUMBER.IDX_SP_NUMBER_HASH_VAL;
create index IDX_SP_NUMBER_RESID_V2 on HFJ_SPIDX_NUMBER(RES_ID, HASH_IDENTITY, SP_VALUE, PARTITION_ID) WITH (ONLINE = ON);
alter table HFJ_SPIDX_NUMBER drop constraint FKCLTIHNC5TGPRJ9BHPT7XI5OTB;
drop index HFJ_SPIDX_NUMBER.IDX_SP_NUMBER_RESID;
ALTER TABLE HFJ_SPIDX_NUMBER DROP CONSTRAINT FKcltihnc5tgprj9bhpt7xi5otb;
alter table HFJ_SPIDX_NUMBER add constraint FK_SP_NUMBER_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index HFJ_SPIDX_NUMBER.IDX_SP_NUMBER_UPDATED;
-- Table: HFJ_SPIDX_QUANTITY
create index IDX_SP_QUANTITY_HASH_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_QUANTITY.IDX_SP_QUANTITY_HASH;
create index IDX_SP_QUANTITY_HASH_SYSUN_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY_SYS_UNITS, SP_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_QUANTITY.IDX_SP_QUANTITY_HASH_SYSUN;
create index IDX_SP_QUANTITY_HASH_UN_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY_AND_UNITS, SP_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_QUANTITY.IDX_SP_QUANTITY_HASH_UN;
create index IDX_SP_QUANTITY_RESID_V2 on HFJ_SPIDX_QUANTITY(RES_ID, HASH_IDENTITY, HASH_IDENTITY_SYS_UNITS, HASH_IDENTITY_AND_UNITS, SP_VALUE, PARTITION_ID) WITH (ONLINE = ON);
alter table HFJ_SPIDX_QUANTITY drop constraint FKN603WJJOI1A6ASEWXBBD78BI5;
drop index HFJ_SPIDX_QUANTITY.IDX_SP_QUANTITY_RESID;
ALTER TABLE HFJ_SPIDX_QUANTITY DROP CONSTRAINT FKn603wjjoi1a6asewxbbd78bi5;
alter table HFJ_SPIDX_QUANTITY add constraint FK_SP_QUANTITY_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index HFJ_SPIDX_QUANTITY.IDX_SP_QUANTITY_UPDATED;
-- Table: HFJ_SPIDX_QUANTITY_NRML
create index IDX_SP_QNTY_NRML_HASH_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_QUANTITY_NRML.IDX_SP_QNTY_NRML_HASH;
create index IDX_SP_QNTY_NRML_HASH_SYSUN_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY_SYS_UNITS, SP_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_QUANTITY_NRML.IDX_SP_QNTY_NRML_HASH_SYSUN;
create index IDX_SP_QNTY_NRML_HASH_UN_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY_AND_UNITS, SP_VALUE, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_QUANTITY_NRML.IDX_SP_QNTY_NRML_HASH_UN;
create index IDX_SP_QNTY_NRML_RESID_V2 on HFJ_SPIDX_QUANTITY_NRML(RES_ID, HASH_IDENTITY, HASH_IDENTITY_SYS_UNITS, HASH_IDENTITY_AND_UNITS, SP_VALUE, PARTITION_ID) WITH (ONLINE = ON);
alter table HFJ_SPIDX_QUANTITY_NRML drop constraint FKRCJOVMUH5KC0O6FVBLE319PYV;
drop index HFJ_SPIDX_QUANTITY_NRML.IDX_SP_QNTY_NRML_RESID;
ALTER TABLE HFJ_SPIDX_QUANTITY_NRML DROP CONSTRAINT FKrcjovmuh5kc0o6fvble319pyv;
alter table HFJ_SPIDX_QUANTITY_NRML add constraint FK_SP_QUANTITYNM_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index HFJ_SPIDX_QUANTITY_NRML.IDX_SP_QNTY_NRML_UPDATED;
-- Table: HFJ_RESOURCE
create index IDX_RES_TYPE_DEL_UPDATED on HFJ_RESOURCE(RES_TYPE, RES_DELETED_AT, RES_UPDATED, PARTITION_ID, RES_ID) WITH (ONLINE = ON);
drop index HFJ_RESOURCE.IDX_INDEXSTATUS;
drop index HFJ_RESOURCE.IDX_RES_TYPE;
-- Table: HFJ_SPIDX_STRING
create index IDX_SP_STRING_HASH_NRM_V2 on HFJ_SPIDX_STRING(HASH_NORM_PREFIX, SP_VALUE_NORMALIZED, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_STRING.IDX_SP_STRING_HASH_NRM;
create index IDX_SP_STRING_HASH_EXCT_V2 on HFJ_SPIDX_STRING(HASH_EXACT, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
drop index HFJ_SPIDX_STRING.IDX_SP_STRING_HASH_EXCT;
drop index HFJ_SPIDX_STRING.IDX_SP_STRING_UPDATED;
-- Table: HFJ_RES_TAG
create index IDX_RES_TAG_RES_TAG on HFJ_RES_TAG(RES_ID, TAG_ID, PARTITION_ID) WITH (ONLINE = ON);
create index IDX_RES_TAG_TAG_RES on HFJ_RES_TAG(TAG_ID, RES_ID, PARTITION_ID) WITH (ONLINE = ON);
ALTER TABLE HFJ_RES_TAG DROP CONSTRAINT IDX_RESTAG_TAGID;
drop index IDX_RESTAG_TAGID on HFJ_RES_TAG;
ALTER TABLE HFJ_RES_TAG ADD CONSTRAINT IDX_RESTAG_TAGID UNIQUE (RES_ID, TAG_ID);
-- Table: HFJ_TAG_DEF
create index IDX_TAG_DEF_TP_CD_SYS on HFJ_TAG_DEF(TAG_TYPE, TAG_CODE, TAG_SYSTEM, TAG_ID);
ALTER TABLE HFJ_TAG_DEF DROP CONSTRAINT IDX_TAGDEF_TYPESYSCODE;
drop index IDX_TAGDEF_TYPESYSCODE on HFJ_TAG_DEF;
ALTER TABLE HFJ_TAG_DEF ADD CONSTRAINT IDX_TAGDEF_TYPESYSCODE UNIQUE (TAG_TYPE, TAG_CODE, TAG_SYSTEM);
-- Table: HFJ_SPIDX_DATE
create index CONCURRENTLY IDX_SP_DATE_HASH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_LOW, SP_VALUE_HIGH, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_DATE_HASH;
drop index CONCURRENTLY IDX_SP_DATE_HASH_LOW;
create index CONCURRENTLY IDX_SP_DATE_HASH_HIGH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_HIGH, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_DATE_HASH_HIGH;
create index CONCURRENTLY IDX_SP_DATE_ORD_HASH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_LOW_DATE_ORDINAL, SP_VALUE_HIGH_DATE_ORDINAL, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_DATE_ORD_HASH;
create index CONCURRENTLY IDX_SP_DATE_ORD_HASH_HIGH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_HIGH_DATE_ORDINAL, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_DATE_ORD_HASH_LOW;
create index CONCURRENTLY IDX_SP_DATE_RESID_V2 on HFJ_SPIDX_DATE(RES_ID, HASH_IDENTITY, SP_VALUE_LOW, SP_VALUE_HIGH, SP_VALUE_LOW_DATE_ORDINAL, SP_VALUE_HIGH_DATE_ORDINAL, PARTITION_ID);
alter table HFJ_SPIDX_DATE drop constraint FK17S70OA59RM9N61K9THJQRSQM;
drop index CONCURRENTLY IDX_SP_DATE_RESID;
alter table HFJ_SPIDX_DATE add constraint FK_SP_DATE_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index CONCURRENTLY IDX_SP_DATE_UPDATED;
-- Table: HFJ_SPIDX_TOKEN
create index CONCURRENTLY IDX_SP_TOKEN_HASH_V2 on HFJ_SPIDX_TOKEN(HASH_IDENTITY, SP_SYSTEM, SP_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_TOKEN_HASH;
create index CONCURRENTLY IDX_SP_TOKEN_HASH_S_V2 on HFJ_SPIDX_TOKEN(HASH_SYS, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_TOKEN_HASH_S;
create index CONCURRENTLY IDX_SP_TOKEN_HASH_SV_V2 on HFJ_SPIDX_TOKEN(HASH_SYS_AND_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_TOKEN_HASH_SV;
create index CONCURRENTLY IDX_SP_TOKEN_HASH_V_V2 on HFJ_SPIDX_TOKEN(HASH_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_TOKEN_HASH_V;
drop index CONCURRENTLY IDX_SP_TOKEN_UPDATED;
create index CONCURRENTLY IDX_SP_TOKEN_RESID_V2 on HFJ_SPIDX_TOKEN(RES_ID, HASH_SYS_AND_VALUE, HASH_VALUE, HASH_SYS, HASH_IDENTITY, PARTITION_ID);
alter table HFJ_SPIDX_TOKEN drop constraint FK7ULX3J1GG3V7MAQREJGC7YBC4;
drop index CONCURRENTLY IDX_SP_TOKEN_RESID;
alter table HFJ_SPIDX_TOKEN add constraint FK_SP_TOKEN_RES foreign key (RES_ID) references HFJ_RESOURCE;
-- Table: HFJ_SPIDX_NUMBER
create index CONCURRENTLY IDX_SP_NUMBER_HASH_VAL_V2 on HFJ_SPIDX_NUMBER(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_NUMBER_HASH_VAL;
create index CONCURRENTLY IDX_SP_NUMBER_RESID_V2 on HFJ_SPIDX_NUMBER(RES_ID, HASH_IDENTITY, SP_VALUE, PARTITION_ID);
alter table HFJ_SPIDX_NUMBER drop constraint FKCLTIHNC5TGPRJ9BHPT7XI5OTB;
drop index CONCURRENTLY IDX_SP_NUMBER_RESID;
alter table HFJ_SPIDX_NUMBER add constraint FK_SP_NUMBER_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index CONCURRENTLY IDX_SP_NUMBER_UPDATED;
-- Table: HFJ_SPIDX_QUANTITY
create index CONCURRENTLY IDX_SP_QUANTITY_HASH_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_QUANTITY_HASH;
create index CONCURRENTLY IDX_SP_QUANTITY_HASH_SYSUN_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY_SYS_UNITS, SP_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_QUANTITY_HASH_SYSUN;
create index CONCURRENTLY IDX_SP_QUANTITY_HASH_UN_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY_AND_UNITS, SP_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_QUANTITY_HASH_UN;
create index CONCURRENTLY IDX_SP_QUANTITY_RESID_V2 on HFJ_SPIDX_QUANTITY(RES_ID, HASH_IDENTITY, HASH_IDENTITY_SYS_UNITS, HASH_IDENTITY_AND_UNITS, SP_VALUE, PARTITION_ID);
alter table HFJ_SPIDX_QUANTITY drop constraint FKN603WJJOI1A6ASEWXBBD78BI5;
drop index CONCURRENTLY IDX_SP_QUANTITY_RESID;
alter table HFJ_SPIDX_QUANTITY add constraint FK_SP_QUANTITY_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index CONCURRENTLY IDX_SP_QUANTITY_UPDATED;
-- Table: HFJ_SPIDX_QUANTITY_NRML
create index CONCURRENTLY IDX_SP_QNTY_NRML_HASH_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_QNTY_NRML_HASH;
create index CONCURRENTLY IDX_SP_QNTY_NRML_HASH_SYSUN_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY_SYS_UNITS, SP_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_QNTY_NRML_HASH_SYSUN;
create index CONCURRENTLY IDX_SP_QNTY_NRML_HASH_UN_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY_AND_UNITS, SP_VALUE, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_QNTY_NRML_HASH_UN;
create index CONCURRENTLY IDX_SP_QNTY_NRML_RESID_V2 on HFJ_SPIDX_QUANTITY_NRML(RES_ID, HASH_IDENTITY, HASH_IDENTITY_SYS_UNITS, HASH_IDENTITY_AND_UNITS, SP_VALUE, PARTITION_ID);
alter table HFJ_SPIDX_QUANTITY_NRML drop constraint FKRCJOVMUH5KC0O6FVBLE319PYV;
drop index CONCURRENTLY IDX_SP_QNTY_NRML_RESID;
alter table HFJ_SPIDX_QUANTITY_NRML add constraint FK_SP_QUANTITYNM_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index CONCURRENTLY IDX_SP_QNTY_NRML_UPDATED;
-- Table: HFJ_RESOURCE
drop index IDX_INDEXSTATUS;
create index CONCURRENTLY IDX_RES_TYPE_DEL_UPDATED on HFJ_RESOURCE(RES_TYPE, RES_DELETED_AT, RES_UPDATED, PARTITION_ID, RES_ID);
drop index CONCURRENTLY IDX_RES_TYPE;
-- Table: HFJ_SPIDX_STRING
create index CONCURRENTLY IDX_SP_STRING_HASH_NRM_V2 on HFJ_SPIDX_STRING(HASH_NORM_PREFIX, SP_VALUE_NORMALIZED, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_STRING_HASH_NRM;
create index CONCURRENTLY IDX_SP_STRING_HASH_EXCT_V2 on HFJ_SPIDX_STRING(HASH_EXACT, RES_ID, PARTITION_ID);
drop index CONCURRENTLY IDX_SP_STRING_HASH_EXCT;
drop index CONCURRENTLY IDX_SP_STRING_UPDATED;
-- Table: HFJ_RES_TAG
create index CONCURRENTLY IDX_RES_TAG_RES_TAG on HFJ_RES_TAG(RES_ID, TAG_ID, PARTITION_ID);
create index CONCURRENTLY IDX_RES_TAG_TAG_RES on HFJ_RES_TAG(TAG_ID, RES_ID, PARTITION_ID);
alter table HFJ_RES_TAG drop constraint if exists IDX_RESTAG_TAGID cascade;
drop index if exists IDX_RESTAG_TAGID cascade;
ALTER TABLE HFJ_RES_TAG ADD CONSTRAINT IDX_RESTAG_TAGID UNIQUE (RES_ID, TAG_ID);
-- Table: HFJ_TAG_DEF
create index IDX_TAG_DEF_TP_CD_SYS on HFJ_TAG_DEF(TAG_TYPE, TAG_CODE, TAG_SYSTEM, TAG_ID);
alter table HFJ_TAG_DEF drop constraint if exists IDX_TAGDEF_TYPESYSCODE cascade;
drop index if exists IDX_TAGDEF_TYPESYSCODE cascade;
ALTER TABLE HFJ_TAG_DEF ADD CONSTRAINT IDX_TAGDEF_TYPESYSCODE UNIQUE (TAG_TYPE, TAG_CODE, TAG_SYSTEM);
-- Table: HFJ_SPIDX_DATE
create index IDX_SP_DATE_HASH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_LOW, SP_VALUE_HIGH, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_DATE_HASH ONLINE;
drop index IDX_SP_DATE_HASH_LOW ONLINE;
create index IDX_SP_DATE_HASH_HIGH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_HIGH, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_DATE_HASH_HIGH ONLINE;
create index IDX_SP_DATE_ORD_HASH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_LOW_DATE_ORDINAL, SP_VALUE_HIGH_DATE_ORDINAL, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_DATE_ORD_HASH ONLINE;
create index IDX_SP_DATE_ORD_HASH_HIGH_V2 on HFJ_SPIDX_DATE(HASH_IDENTITY, SP_VALUE_HIGH_DATE_ORDINAL, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_DATE_ORD_HASH_LOW ONLINE;
create index IDX_SP_DATE_RESID_V2 on HFJ_SPIDX_DATE(RES_ID, HASH_IDENTITY, SP_VALUE_LOW, SP_VALUE_HIGH, SP_VALUE_LOW_DATE_ORDINAL, SP_VALUE_HIGH_DATE_ORDINAL, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
alter table HFJ_SPIDX_DATE drop constraint FK17S70OA59RM9N61K9THJQRSQM;
drop index IDX_SP_DATE_RESID ONLINE;
alter table HFJ_SPIDX_DATE add constraint FK_SP_DATE_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index IDX_SP_DATE_UPDATED ONLINE;
-- Table: HFJ_SPIDX_TOKEN
create index IDX_SP_TOKEN_HASH_V2 on HFJ_SPIDX_TOKEN(HASH_IDENTITY, SP_SYSTEM, SP_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_TOKEN_HASH ONLINE;
create index IDX_SP_TOKEN_HASH_S_V2 on HFJ_SPIDX_TOKEN(HASH_SYS, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_TOKEN_HASH_S ONLINE;
create index IDX_SP_TOKEN_HASH_SV_V2 on HFJ_SPIDX_TOKEN(HASH_SYS_AND_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_TOKEN_HASH_SV ONLINE;
create index IDX_SP_TOKEN_HASH_V_V2 on HFJ_SPIDX_TOKEN(HASH_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_TOKEN_HASH_V ONLINE;
drop index IDX_SP_TOKEN_UPDATED ONLINE;
create index IDX_SP_TOKEN_RESID_V2 on HFJ_SPIDX_TOKEN(RES_ID, HASH_SYS_AND_VALUE, HASH_VALUE, HASH_SYS, HASH_IDENTITY, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
alter table HFJ_SPIDX_TOKEN drop constraint FK7ULX3J1GG3V7MAQREJGC7YBC4;
drop index IDX_SP_TOKEN_RESID ONLINE;
alter table HFJ_SPIDX_TOKEN add constraint FK_SP_TOKEN_RES foreign key (RES_ID) references HFJ_RESOURCE;
-- Table: HFJ_SPIDX_NUMBER
create index IDX_SP_NUMBER_HASH_VAL_V2 on HFJ_SPIDX_NUMBER(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_NUMBER_HASH_VAL ONLINE;
create index IDX_SP_NUMBER_RESID_V2 on HFJ_SPIDX_NUMBER(RES_ID, HASH_IDENTITY, SP_VALUE, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
alter table HFJ_SPIDX_NUMBER drop constraint FKCLTIHNC5TGPRJ9BHPT7XI5OTB;
drop index IDX_SP_NUMBER_RESID ONLINE;
alter table HFJ_SPIDX_NUMBER add constraint FK_SP_NUMBER_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index IDX_SP_NUMBER_UPDATED ONLINE;
-- Table: HFJ_SPIDX_QUANTITY
create index IDX_SP_QUANTITY_HASH_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_QUANTITY_HASH ONLINE;
create index IDX_SP_QUANTITY_HASH_SYSUN_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY_SYS_UNITS, SP_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_QUANTITY_HASH_SYSUN ONLINE;
create index IDX_SP_QUANTITY_HASH_UN_V2 on HFJ_SPIDX_QUANTITY(HASH_IDENTITY_AND_UNITS, SP_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_QUANTITY_HASH_UN ONLINE;
create index IDX_SP_QUANTITY_RESID_V2 on HFJ_SPIDX_QUANTITY(RES_ID, HASH_IDENTITY, HASH_IDENTITY_SYS_UNITS, HASH_IDENTITY_AND_UNITS, SP_VALUE, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
alter table HFJ_SPIDX_QUANTITY drop constraint FKN603WJJOI1A6ASEWXBBD78BI5;
drop index IDX_SP_QUANTITY_RESID ONLINE;
alter table HFJ_SPIDX_QUANTITY add constraint FK_SP_QUANTITY_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index IDX_SP_QUANTITY_UPDATED ONLINE;
-- Table: HFJ_SPIDX_QUANTITY_NRML
create index IDX_SP_QNTY_NRML_HASH_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY, SP_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_QNTY_NRML_HASH ONLINE;
create index IDX_SP_QNTY_NRML_HASH_SYSUN_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY_SYS_UNITS, SP_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_QNTY_NRML_HASH_SYSUN ONLINE;
create index IDX_SP_QNTY_NRML_HASH_UN_V2 on HFJ_SPIDX_QUANTITY_NRML(HASH_IDENTITY_AND_UNITS, SP_VALUE, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_QNTY_NRML_HASH_UN ONLINE;
create index IDX_SP_QNTY_NRML_RESID_V2 on HFJ_SPIDX_QUANTITY_NRML(RES_ID, HASH_IDENTITY, HASH_IDENTITY_SYS_UNITS, HASH_IDENTITY_AND_UNITS, SP_VALUE, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
alter table HFJ_SPIDX_QUANTITY_NRML drop constraint FKRCJOVMUH5KC0O6FVBLE319PYV;
drop index IDX_SP_QNTY_NRML_RESID ONLINE;
alter table HFJ_SPIDX_QUANTITY_NRML add constraint FK_SP_QUANTITYNM_RES foreign key (RES_ID) references HFJ_RESOURCE;
drop index IDX_SP_QNTY_NRML_UPDATED ONLINE;
-- Table: HFJ_RESOURCE
drop index IDX_INDEXSTATUS;
create index IDX_RES_TYPE_DEL_UPDATED on HFJ_RESOURCE(RES_TYPE, RES_DELETED_AT, RES_UPDATED, PARTITION_ID, RES_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_RES_TYPE ONLINE;
-- Table: HFJ_SPIDX_STRING
create index IDX_SP_STRING_HASH_NRM_V2 on HFJ_SPIDX_STRING(HASH_NORM_PREFIX, SP_VALUE_NORMALIZED, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_STRING_HASH_NRM ONLINE;
create index IDX_SP_STRING_HASH_EXCT_V2 on HFJ_SPIDX_STRING(HASH_EXACT, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
drop index IDX_SP_STRING_HASH_EXCT ONLINE;
drop index IDX_SP_STRING_UPDATED ONLINE;
-- Table: HFJ_RES_TAG
create index IDX_RES_TAG_RES_TAG on HFJ_RES_TAG(RES_ID, TAG_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
create index IDX_RES_TAG_TAG_RES on HFJ_RES_TAG(TAG_ID, RES_ID, PARTITION_ID) ONLINE DEFERRED INVALIDATION;
ALTER TABLE HFJ_RES_TAG DROP CONSTRAINT IDX_RESTAG_TAGID;
drop index IDX_RESTAG_TAGID;
ALTER TABLE HFJ_RES_TAG ADD CONSTRAINT IDX_RESTAG_TAGID UNIQUE (RES_ID, TAG_ID);
-- Table: HFJ_TAG_DEF
create index IDX_TAG_DEF_TP_CD_SYS on HFJ_TAG_DEF(TAG_TYPE, TAG_CODE, TAG_SYSTEM, TAG_ID);
ALTER TABLE HFJ_TAG_DEF DROP CONSTRAINT IDX_TAGDEF_TYPESYSCODE;
drop index IDX_TAGDEF_TYPESYSCODE;
ALTER TABLE HFJ_TAG_DEF ADD CONSTRAINT IDX_TAGDEF_TYPESYSCODE UNIQUE (TAG_TYPE, TAG_CODE, TAG_SYSTEM);
The versions of a few dependencies have been bumped to the latest versions (dependent HAPI modules listed in brackets):
The HAPI FHIR server GraphQL endpoints now support GraphQL introspection, making them much easier to use with GraphQL-capable IDEs.
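Introspection is what lets an IDE discover the schema automatically; a GraphQL-capable client typically sends a query along these lines to the server's GraphQL endpoint (a minimal sketch of the standard introspection pattern, not HAPI-specific):

```graphql
# Minimal introspection query, as issued automatically by GraphQL-capable IDEs
{
  __schema {
    queryType { name }
    types { name kind }
  }
}
```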
SearchNarrowingInterceptor can now be used to automatically narrow searches to include a code:in or code:not-in expression, mandating that results be drawn from a specified list of codes.
Performance for JPA Server ValueSet expansion has been significantly optimized in order to minimize database lookups, especially with large expansions.
Support has been added to the JPA server for token :not-in queries. Similar to :not queries, resources will currently be considered to match if any codes in the relevant resource field are not found in the given ValueSet (as opposed to matching if all codes are not in the given ValueSet).
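For illustration, a :not-in search takes a ValueSet URL as its value; a request like the following (the ValueSet URL is hypothetical) matches resources where any coding in Observation.code is not found in the referenced ValueSet:

```http
GET [base]/Observation?code:not-in=http://example.org/fhir/ValueSet/excluded-codes
```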
The SearchNarrowingInterceptor can now narrow searches to require a token:in or token:not-in parameter.
Added logs that identify the resource that failed the validation check during package installation, and describe the reason for the failure.
A new batch operation framework for executing long-running background jobs has been created. This new framework is called 'Batch2', and will eventually replace Spring Batch. This framework is intended to be much more resilient to failures as well as much more parallelized than Spring Batch jobs.
Support has now (finally!) been added for the FHIR Bulk Import ($import) operation. This operation is the first operation to leverage the new Batch2 framework.
A consent service implementation that enforces search narrowing rules specified by the SearchNarrowingInterceptor has been added. This can be used to narrow results from searches in cases where this can't be done using search URL manipulation. See ResultSet Narrowing for more information.
When validating a code, a helpful error message is now returned if the user has supplied a ValueSet URL where a CodeSystem URL should be used, since this is a common mistake.
It is now possible to register multiple IConsentService implementations against a single ConsentInterceptor.
Previously there was no way to recreate freetext indexes for terminology entities. A new CLI operation, reindex-terminology, now exists for this purpose.
The JPA server terminology service can now process IS-A filters in ValueSet expansion on servers with Hibernate Search disabled.
When using ApacheProxyAddressStrategy as a server address strategy, the Forward header will now be respected. Thanks to Thomas Papke for the pull request!
Added a helper method to BundleUtil, to support the creation of a new bundle entry and to set the value for one of the fields.
Added a new multi-column index on the HFJ_RESOURCE table indexing the columns RES_TYPE, RES_DELETED_AT, RES_UPDATED, PARTITION_ID and RES_ID and removed the existing single-column index on the RES_TYPE column. This new index will improve the performance of the $reindex operation and will be useful for some other queries as well.
Added a new pointcut: STORAGE_PRESTORAGE_CLIENT_ASSIGNED_ID which is invoked when a user attempts to create a resource with a client-assigned ID.
Group Bulk Export (e.g. Group/123/$export) now additionally supports Organization and Practitioner as valid _type parameters. This works internally by querying using a _has parameter.
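As an illustration, a Group-level export restricted to these additional types might be kicked off as follows (the group ID is illustrative; the Accept and Prefer headers follow the FHIR Bulk Data Access kick-off pattern):

```http
GET [base]/Group/123/$export?_type=Patient,Practitioner,Organization
Accept: application/fhir+json
Prefer: respond-async
```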
Added a search parameter modifier :nickname that can be used with the 'name' or 'given' search parameters. E.g. Patient?given:nickname=Kenny will match a patient with the given name Kenneth. Also added an MDM matching algorithm named NICKNAME that matches based on this.
The resource JSON can now be stored and retrieved in the Lucene/Elasticsearch index. This enables some queries to provide results without using the database. This is enabled via DaoConfig.setStoreResourceInLuceneIndex()
When using JPA persistence with Hibernate Search (Lucene or Elasticsearch), simple FHIR queries that can be satisfied completely by Hibernate Search no longer query the database. Before, every search involved the database, even when not needed. This is faster when there are many results.
When updating or reindexing a resource on a JPA server where indexing search param presence is enabled (i.e. support for the :missing modifier), updates will try to reuse existing database rows where possible instead of deleting one row and adding another. This should cause a slight performance boost on systems with this feature enabled.
Reverted the change to treat canonical references as local when the getTreatBaseUrlsAsLocal() flag is set. That change was added to address a limitation in _include that did not process references by canonical URLs. The _include search operation now includes canonical references without special configuration.
Previously, during RepositoryValidation, we would perform validation on resources created via DaoConfig's setAutoCreatePlaceholderReferenceTargets property. This caused validation failures on placeholder resources as they do not conform to any profile. This has been changed, and Repository Validation will not occur on any resource containing an extension with URL http://hapifhir.io/fhir/StructureDefinition/resource-placeholder.
MDM logging has been changed to include the scores for each applied matcher field, and the summary score is no longer recorded when creating an MDM link.
Several classes and interfaces related to the $expunge operation have moved from the hapi-fhir-jpaserver-base project to the hapi-fhir-storage project.
Ensure that migration steps do not time out. Adding an index, among other migration steps, can exceed the default connection timeout. This has been changed: migration steps now have no timeout and will run until completion.
Method signatures on several interfaces and classes related to the $expunge operation have changed to support the case where the primary identifier of a resource is not necessarily a Long.
Modified operation method binding so that later registered custom providers take precedence over earlier registered ones. In particular, this means that custom operations can now override built-in operations.
The normalized and exact string database indexes have been changed to provide faster string searches.
Hibernate search has been updated to 6.1.4, which adds support for Elasticsearch 7.16.X. Previous installations running on 7.10.X will continue to work normally, and this change does not require you to upgrade your elasticsearch service.
The XML and JSON parsers now encode narratives of contained resources; previously, narratives were skipped for contained resources. This was a rule which was only valid in STU1 (https://www.hl7.org/fhir/DSTU1/narrative.html) and was removed in DSTU2+.
A regression in HAPI FHIR 5.5.0 meant that very large transactions where the bundle contained over 1000 distinct client-assigned resource IDs could fail on MSSQL and Oracle due to SQL parameter count limitations.
An occasional concurrency failure in AuthorizationInterceptor has been resolved. Thanks to Martin Visser for reporting and providing a reproducible test case!
When performing a search with a _revinclude, the results sometimes incorrectly included resources that were reverse included by other search parameters with the same name. Thanks to GitHub user @vivektk84 for reporting and to Jean-Francois Briere for proposing a fix.
When searching for date search parameters on Postgres, ordinals could sometimes be represented as strings, causing a search failure. This has been corrected.
A race condition in the Subscription processor meant that a Subscription could fail to register right away if it was modified immediately after being created. Note that this issue only affects rapid changes, and only caused the subscription to be unregistered for a maximum of one minute after its initial creation so the impact of this issue is expected to be low.
When cross-partition reference mode was used, rest-hook subscriptions on a partition-enabled server could cause an NPE, triggered by the reloading of subscriptions when the server is restarted. This issue has been fixed. An issue with _revinclude not working for rest-hook subscriptions has also been fixed.
Users were permitted to bulk export all groups/patients even when they were unauthorized. This issue has been fixed.
A GET for a resource with _total=accurate and _summary=count should throw an InvalidRequestException when a consent service is enabled, but previously did not. This issue has been fixed.
ValueSet pre-expansion was failing when the number of concepts was larger than the configured BooleanQuery.maxClauseCount value (default is 1024). This is now fixed.
Fixed a bug when searching for a resource persistent ID in a partitioned environment: when the Cross-Partition Reference Mode was set to ALLOWED_UNQUALIFIED, the search used the wrong partition ID, resulting in the resource persistent ID not being found.
A regression in HAPI FHIR 5.7.0 meant that when UnknownCodeSystemWarningValidationSupport was configured for WARNING behaviour, validating a field with an implicit code system could incorrectly result in an error.
Reindexing jobs were not respecting the date range passed in SearchParams. These date ranges are now taken into account when running reindexing jobs.
Previously, or expressions were not being properly validated in FHIRPath in STU3 due to a bug with expression path splitting. This has been corrected.
During package upload, all resources were filtered by status=active, which is inappropriate for some types: for Subscription the expected value is requested, and for DocumentReference and Communication other values beyond the expected active/not-active also exist. This change adds a type-specific status check for those resource types in place of the usual active-value match.
Previously the FHIR parser would only set the resource type on the resource ID for resources which implemented IDomainResource. This caused a few resource types to be missed. This has been corrected, and the resource type is now set on the ID element for all IBaseResource instances instead.
Previously, allowing any client assigned ID by setting ClientIdStrategyEnum to ANY threw an exception when creating CodeSystem resources. This has been corrected.
Supporting expansion of ValueSet include/exclude system URI expressed in canonical format during validation.
Previously if partitioning was enabled, the URL returned from $export-poll-status provided a location in the wrong partition. This has been fixed and all Bulk Exports will be stored in the DEFAULT partition.
TerserUtil.mergeAllFields() threw an exception when cloning contained resources. This has been corrected.
Added a new setting to BinaryStorageInterceptor which allows you to disable binary de-externalization during resource reads.
Previously, Bulk export jobs automatically cleared the collection after a hardcoded 2 hour time period from start of the job. This is now configurable via a new DaoConfig property, setBulkExportFileRetentionPeriodHours(). If this is set to a value of 0 or below, then the collections will never be expired.
MDM was not excluding NO_MATCH links from golden-resource candidates in EID mode. This caused MDM to produce an error when a Patient EID was changed after that patient's link was updated to NO_MATCH. This has been corrected.
Unmodified string searches and string :contains search were incorrectly case-sensitive when configured with recent versions of Lucene/Elasticsearch. This has been corrected.
While converting the reindexing job to the new batch framework, a regression of #3441 was introduced: reindexing jobs were not respecting the passed in _lastUpdated parameter. These date ranges are now taken into account when running reindexing jobs.
When searching with the _lastUpdated parameter and using the ne prefix, search would fail with HAPI-1928 error. This has been fixed.
Command-line log output now only sends colour commands if output is being printed to a console. Otherwise, (e.g. if output is redirected to a file) the log output will not contain any special colour escape characters.
Jobs in the new batch framework (Batch2) were staying in IN_PROGRESS status after being cancelled. This is now fixed: after cancellation, the status is changed to CANCELLED.
Previously, it was possible to update a resource using the wrong tenant ID. This issue has been fixed.
The Spring Framework library was upgraded to version 5.3.18 in order to avoid depending on a version known to be vulnerable to CVE-2022-22965, known as Spring4Shell. HAPI FHIR is not believed to be vulnerable to this issue, but the library has been bumped as a precaution.
Released: 2022-07-07
Codename: (Sojourner)
Released: 2022-07-07
Codename: (Sojourner)
A regression in HAPI FHIR 5.5.0 meant that very large transactions where the bundle contained over 1000 distinct client-assigned resource IDs could fail on MSSQL and Oracle due to SQL parameter count limitations.
Released: 2022-06-03
Codename: (Sojourner)
Previously, subscriptions in a partition with a null ID would be matched against incoming resources from all partitions. This has been changed: subscriptions now only match incoming resources in the partition in which the subscription exists, unless cross-partition subscription is enabled and the subscription has the appropriate extension.
Released: 2022-05-30
Codename: (Sojourner)
This version specifically modifies reindex to support moving data from the RES_TEXT to the RES_TEXT_VC column in the HFJ_RES_VER table. This is especially important for PostgreSQL users, as the RES_TEXT column only has an addressable space of about 4 billion resources.
Any installation that exceeds this number of resources stored in the RES_TEXT column will find that the software hangs on attempting to store new resources. In order to avoid this, you should use the DaoConfig#setInlineResourceTextBelowSize setting, and set it to a large non-zero value. This will cause PostgreSQL to store the resource text not as a LOB, but instead as a VARCHAR field. By default, this field has length 4000, but you can and should update it by following the documentation here.
UPDATE: This change has been removed. Previous content was: 'Modify reindexing to migrate data HFJ_RES_VER data from the RES_TEXT column to the RES_TEXT_VC if the resource's size falls inside of the configuration defined in DaoConfig's getInlineResourceTextBelowSize property.'
Released: 2022-03-31
Codename: (Sojourner)
Released: 2022-03-09
Codename: (Sojourner)
Previously, during RepositoryValidation, we would perform validation on resources created via DaoConfig's setAutoCreatePlaceholderReferenceTargets property. This caused validation failures on placeholder resources as they do not conform to any profile. This has been changed, and Repository Validation will not occur on any resource containing an extension with URL http://hapifhir.io/fhir/StructureDefinition/resource-placeholder.
Released: 2022-02-17
Codename: (Sojourner)
The versions of a few dependencies have been bumped to the latest versions (dependent HAPI modules listed in brackets):
In the JPA server, the token :of-type modifier is now supported. This is an optional feature and must be explicitly enabled (default is disabled).
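The :of-type modifier is defined in the FHIR R4 search specification and takes the form type-system|type-code|value, e.g. searching for a patient by a medical record number:

```http
GET [base]/Patient?identifier:of-type=http://terminology.hl7.org/CodeSystem/v2-0203|MR|446053
```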
Added partition support for subscriptions. Subscriptions will now only match resources from the same partition.
Added a new configuration option to validate bundle resources concurrently, and another new option to skip validation of contained resources.
Added support for cross-partition subscriptions. Subscriptions in the default partition can now listen to resource changes from all partitions.
The resource contents are optionally added to the Lucene index so that $lastn queries can be satisfied without database access. This is activated by the DaoConfig 'StoreResourceInLuceneIndex' property.
Added the http://hapifhir.io/fhir/StructureDefinition/subscription-delivery-retry-count extension, which can be provided on a subscription to define a specific retry strategy (retry up to retry-count times before giving up).
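A minimal sketch of a Subscription carrying this extension (the valueInteger element type, the criteria, and the endpoint are assumptions for illustration; only the extension URL comes from this entry):

```json
{
  "resourceType": "Subscription",
  "status": "requested",
  "reason": "Example subscription with a delivery retry strategy",
  "criteria": "Patient?",
  "channel": {
    "type": "rest-hook",
    "endpoint": "https://example.com/fhir-hook",
    "payload": "application/fhir+json"
  },
  "extension": [
    {
      "url": "http://hapifhir.io/fhir/StructureDefinition/subscription-delivery-retry-count",
      "valueInteger": 5
    }
  ]
}
```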
Implement support for $member-match operation by coverage-id or coverage-identifier. (Beneficiary demographic matching not supported)
Add the concept of messageKey to the BaseResourceMessage class. This is to support brokers which permit key-based partitioning.
Added ability to specify max code lengths for supported Phonetic Encoders (Metaphone, Double_Metaphone). To specify max code length, append the code length to the searchparameter-phonetic-encoder extension in brackets after the encoder type. eg: { "url": "http://hapifhir.io/fhir/StructureDefinition/searchparameter-phonetic-encoder", "valueString": "METAPHONE(5)" }
Added the ability to load Implementation Guide packages from the filesystem by supporting file:/ URLs.
A number of minor optimizations have been added to the JsonParser serializer module as well as to the transaction processor. These optimizations lead to a significant performance improvement when processing large transaction bundles (i.e. transaction bundles containing a large number of entries).
A new configuration option has been added to ParserOptions called setAutoContainReferenceTargetsWithNoId. This setting disables the automatic creation of contained resources in cases where the target/contained resource is set to a Reference instance but not actually added to the Resource.contained list. Disabling this automatic containing yields a minor performance improvement in systems that do not need this functionality.
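A brief sketch of disabling the behavior, using the ParserOptions setting named in this entry (assuming a standard HAPI FHIR R4 context):

```java
// Disable automatic containment of reference targets that have no ID.
FhirContext ctx = FhirContext.forR4();
ctx.getParserOptions().setAutoContainReferenceTargetsWithNoId(false);
```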
Improved validation performance by switching validation serialization from XML to JSON.
Significantly improved $delete-expunge performance by adding database indexes and by filtering the foreign keys to be deleted by resource type.
A new JPA setting has been added to DaoConfig settings called Inline Resource Text Below Size. When this setting is set to a positive value, any resources with a total serialized length smaller than the given number will be stored as an inline VARCHAR2 text value on the HFJ_RES_VER table, instead of using an external LOB column. This improves read/write performance (often by a significant amount) at the expense of a slightly larger amount of disk usage.
Improves the performance of the query for searching by chained search parameter when the Index Contained Resources feature is enabled.
Previously in configuring MDM, you were only allowed to set a single eidSystem which was global regardless of how many resource types you were performing MDM on. This has been changed. That field is now deprecated, and a new field called eidSystems (a json object) should be used instead. This will allow you to have granular control over which resource types use which EIDs. See the documentation of eidSystems for more information.
Fixed a regression where packages would fail to load if HAPI-FHIR was operating in DATABASE binary storage mode.
These changes relate to bumping the dependency on org.hl7.fhir.core to version 5.3.0. The main updates behind the majority of the changes in this PR were the addition of the interface IValidationPolicyAdvisor and the refactoring of some of the main validation classes (for cleanup purposes). Many of the error messages were also updated, so the corresponding tests in HAPI had to be modified to reflect these message changes. Regarding the new IValidationPolicyAdvisor interface: by implementing this interface and passing it through the validation context in HAPI to the InstanceValidator in the core libraries, users can now control the behavior of the validator when validating references, contained resources, and coded content. Previously, only references were controlled in this way, and users controlled this by overriding the validation fetcher. Test cases were added in the validation test cases repository to demonstrate its functionality, in particular the two test cases contained-resource-bad-id and contained-resource-bad-id-ignore. The definitions and expected outcomes of these test cases are in the manifest.json in that repository.
Add email configuration to MailSvc constructor
Removed the following modules from the HAPI-FHIR project: hapi-fhir-testpage-interceptor, hapi-fhir-structures-dstu, hapi-fhir-oauth2, hapi-fhir-narrativegenerator.
Calling the $document operation previously omitted the fullUrl of the bundle entries. This has been corrected.
When a subscription fails to activate (either on the first attempt or on start up), hapi-fhir will log the exception, set the subscription to ERROR state, and continue on. This is to prevent hanging on startup for errors that cannot be resolved by infinite retries.
The $member-match operation could previously be invoked via GET, which should not have been allowed. This has been fixed: the operation can now only be invoked via POST.
Using the delivery option Payload Search Result Mode for rest-hook subscriptions on a partition-enabled server would cause an NPE. This issue has been fixed.
Users were able to access data on partitions they did not have access to by using pagination links generated by users who did have access. This has been fixed.
Code System deletion background tasks were taking over a day to complete on very large CodeSystems for PostgreSQL, SQL Server, and Oracle databases. This has been improved, and deletion now takes less than an hour on all three platforms.
RequestValidatingInterceptor incorrectly prevented GraphQL requests from being submitted using the HTTP POST form of the GraphQL operation. This has been corrected.
Updated UnknownCodeSystemWarningValidationSupport to allow the throwing of warnings if configured to do so.
Fixed race condition in BaseHapiFhirDao that resulted in exceptions being thrown when multiple resources in a single bundle transaction had the same tags.
Resource links were previously not being consistently created in cases where references were versioned and pointing to recently auto-created placeholder resources.
Fixed language code validation so that it is case insensitive (e.g. en-US, en-us, EN-US, and EN-us all work).
Fixed a serious performance issue with the $reindex operation.
In servers with a large number of StructureDefinition resources loaded, occasionally a call for the CapabilityStatement could take a very long time to return. This has been corrected.
The $everything operation returned a 500 error when querying for a page where _getpagesoffset was greater than or equal to 300. This has been corrected.
MDM was throwing a NullPointerException when upgrading a match from POSSIBLE_MATCH to MATCH. This has been corrected.
Fixed a bug that caused queries checking Elasticsearch with _total=accurate to return all values when using the _content or _text parameters.
Terminology reindexing job was launching the wrong process, so terminology reindexing was never launched. This has been fixed.
Concurrent validation failed with StringIndexOutOfBoundsException when the validation location did not contain a '.'. This has been corrected.
Transactions with entries that have request.url values that are fully qualified urls will now be rejected.
In rare cases where a patient ID string happened to have a hashCode equal to Integer.MIN_VALUE, PatientIdPartitionInterceptor would generate an invalid partition ID. This has been fixed.
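The underlying pitfall is a classic Java edge case: Math.abs(Integer.MIN_VALUE) is still Integer.MIN_VALUE, so any abs-then-modulo hashing scheme can emit a negative ID for exactly that one hash value. A self-contained sketch (method names and the partition count are hypothetical, not HAPI API):

```java
public class PartitionHashDemo {

    // Buggy scheme: assumes Math.abs always returns a non-negative value.
    // For Integer.MIN_VALUE it does not, so the modulo result can be negative.
    static int buggyPartitionId(int hash, int partitionCount) {
        return Math.abs(hash) % partitionCount;
    }

    // Safe scheme: Math.floorMod always yields a value in [0, partitionCount).
    static int fixedPartitionId(int hash, int partitionCount) {
        return Math.floorMod(hash, partitionCount);
    }

    public static void main(String[] args) {
        int hash = Integer.MIN_VALUE; // the one int value Math.abs cannot negate
        System.out.println(buggyPartitionId(hash, 15000)); // -8648 (invalid)
        System.out.println(fixedPartitionId(hash, 15000)); //  6352 (valid)
    }
}
```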
When $mdm-clear operation batch was split into multiple threads, ResourceVersionConflictExceptions were being thrown. This issue has been fixed.
Conformance validation should happen once per client-endpoint, no matter the number of threads executing the requests, but was happening once per client-endpoint-thread. This is now fixed.
Previously, $binary-access-read and other operations that return neither method outcomes nor resources were causing no invocation of the SERVER_OUTGOING_RESPONSE pointcut. This has been corrected, and those operations will now correctly invoke SERVER_OUTGOING_RESPONSE.
GraphQL searches could not be executed when the number of matching results exceeded the default page size. This has been corrected.
In HAPI FHIR 5.6.0, a regression meant that JPA server FHIR transactions containing two identical conditional operations (for example, a transaction with two conditional create operations where the conditional URL was identical for both) could result in resources being saved in an inconsistent state. This scenario will now result in the server returning an HTTP 400 error instead. Note that there is no meaning defined in the FHIR specification for duplicate entries in a FHIR transaction bundle, so it has never been recommended to do this.
The JPA server will now validate _include and _revinclude statements before beginning search processing. This prevents an ugly JPA error when some invalid params are used.
Enhanced Lucene Indexing failed when indexing contained resources. Contained resources are not indexed in Lucene/Elasticsearch, but this no longer causes an exception.
Chained searches will handle common search parameters correctly when the Index Contained Resources configuration parameter is enabled.
When cross-partition reference mode was used, rest-hook subscriptions on a partition-enabled server would cause an NPE, caused by the reloading of subscriptions when the server restarted. This issue has been fixed. Also fixed an issue where _revinclude did not work for rest-hook subscriptions.
In the JPA server, searching using the token :text modifier will no longer index and match values in Identifier.type.text. Previously we indexed this element, but the FHIR specification does not actually show this field as being indexed for this modifier, and intuitively it doesn't make sense to do so.
Released: 2022-07-07
Codename: (Raccoon)
An occasional concurrency failure in AuthorizationInterceptor has been resolved. Thanks to Martin Visser for reporting and providing a reproducible test case!
Released: 2022-03-31
Codename: (Raccoon)
Released: 2022-03-31
Codename: (Quasar)
Bump Spring Core dependency to 5.3.18 to remove vulnerability.
Released: 2022-02-11
Codename: (Quasar)
Chained searches will handle common search parameters correctly when the Index Contained Resources configuration parameter is enabled.
Released: 2022-01-22
Codename: (Quasar)
Improves the performance of the query for searching by chained search parameter when the Index Contained Resources feature is enabled.