iDAAS Connect HL7
iDAAS-Connect-HL7 supports the following HL7 message types (ADT, ORM, ORU, MFN, MDM, PHA, SCH and VXU) from any vendor and any version of HL7 v2. There is support for CCDA as well. Additionally, we support automated conversion from HL7 and CCDA to their FHIR equivalents, simply by changing a setting.
For all iDaaS design patterns, it is assumed that you will either install the following as part of this effort or already have them:
- An existing Kafka instance (or some flavor of it) up and running. Red Hat currently implements AMQ Streams, based on Apache Kafka; however, we
have implemented iDaaS with numerous Kafka implementations. Please see the following files we have included to help:
Kafka
KafkaWindows
No matter the platform chosen, it is important to know that the out-of-the-box Kafka configuration might require some changes depending upon your implementation needs. Here are the changes we have made (the same setting is also shown in code in the sketch after this list):
In the /config/consumer.properties file we set the auto.offset.reset property to earliest. This is intended to enable any new consumer joining the group to read ALL the messages from the start of the topic.
auto.offset.reset=earliest
- Some understanding of building and deploying Java artifacts and the commands associated with them. If using Maven commands, Maven needs to be installed and working in the environment you are using. More details about Maven can be found here.
- An active internet connection, to ensure that if any Maven commands are run and any libraries need to be pulled down, they can be.
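As a point of reference, the snippet below is a minimal sketch (not part of this repository) of a plain Java Kafka consumer that applies the same auto.offset.reset=earliest behavior programmatically. The broker address matches the idaas.kafkaBrokers default shown later in this document, while the group id and class name are illustrative assumptions.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class EarliestOffsetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Hypothetical consumer group name, purely for illustration.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "idaas-demo");
        // Same effect as auto.offset.reset=earliest in config/consumer.properties:
        // a new consumer group starts reading from the beginning of the topic.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The ADT topic name matches the idaas.adtTopicName default shown below.
            consumer.subscribe(Collections.singletonList("mctn_mms_adt"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
```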
This section is intended to cover the scenarios included within this demo.
This repository follows a very common clinical care implementation pattern: one system sending data to another system via the HL7 message standard over MLLP.
Identifier | Description |
---|---|
Healthcare Facility | MCTN |
Sending EMR/EHR | MMS |
HL7 Message Events | ADT (Admissions, Discharges and Transfers), ORM (Orders), ORU (Results) |
It is important to know that for every HL7 Message Type/Event there is a specifically defined, and dedicated, HL7 socket server endpoint.
Here is a visual intended to show the general data flow and how the accelerator design pattern is intended to work.
- Any external connecting system will use an HL7 client (external to this application) to connect to the specifically defined HL7 server socket (one socket per data type) and will typically stay connected.
- The HL7 client will send a single HL7-based transaction to the HL7 server.
- iDAAS Connect HL7 will do the following actions (a minimal client-side sketch of this exchange follows this list):
a. Receive the HL7 message. Internally, it will audit the data it received to a specifically defined topic.
b. The HL7 message will then be processed to a specifically defined topic for this implementation. There is a specific topic pattern: for the facility and application, each data type has a specific topic defined for it. For example: Admissions: MCTN_MMS_ADT, Orders: MCTN_MMS_ORM, Results: MCTN_MMS_ORU, etc.
c. An acknowledgement will then be sent back to the HL7 client (this tells the client it can send the next message; if the client does not receive the acknowledgement in a timely manner, it will resend the same message until it receives an ACK).
d. The acknowledgement is also sent to the auditing topic location.
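For illustration, here is a minimal sketch of the external HL7 client side of this exchange using raw MLLP framing (a 0x0B start byte and a 0x1C 0x0D trailer). The host, sample ADT message, and class name are assumptions for demonstration (the port matches the idaas.adtPort default shown below); a production sender would typically use an HL7 library such as HAPI instead.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class MllpClientSketch {
    // MLLP framing bytes: <VT> message <FS><CR>
    private static final byte START_BLOCK = 0x0B;
    private static final byte END_BLOCK = 0x1C;
    private static final byte CARRIAGE_RETURN = 0x0D;

    public static void main(String[] args) throws Exception {
        // Illustrative ADT message; real messages come from the sending EMR/EHR.
        String hl7 = "MSH|^~\\&|MMS|MCTN|IDAAS|MCTN|20240101120000||ADT^A01|MSG00001|P|2.4\r"
                   + "PID|1||12345^^^MCTN||DOE^JOHN||19700101|M\r";

        // Port 10001 matches the default idaas.adtPort in application.properties.
        try (Socket socket = new Socket("localhost", 10001)) {
            OutputStream out = socket.getOutputStream();
            out.write(START_BLOCK);
            out.write(hl7.getBytes(StandardCharsets.UTF_8));
            out.write(END_BLOCK);
            out.write(CARRIAGE_RETURN);
            out.flush();

            // Read the ACK frame back; receiving the acknowledgement tells the
            // client it is safe to send the next message.
            InputStream in = socket.getInputStream();
            StringBuilder ack = new StringBuilder();
            int b;
            while ((b = in.read()) != -1) {
                if (b == END_BLOCK) break;        // end of MLLP frame
                if (b != START_BLOCK) ack.append((char) b);
            }
            System.out.println("ACK received:\n" + ack);
        }
    }
}
```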
This section covers running the solution. There are several options to start the engine up!
In order for ANY processing to occur, you must have a Kafka server running that this accelerator is configured to connect to.
Please see the following files we have included to help:
Kafka
KafkaWindows
This section covers how to get the application started.
- Maven: go to the directory where you have this code. Specifically, you want to be at the same level as the pom.xml file, and execute the
following command:
mvn clean install
You can run the individual efforts with a specific command; it is always recommended that you run mvn clean install first.
Here is the command to run the design pattern from the command line:
mvn spring-boot:run
Depending upon whether you have ever run this code before and what libraries you already have in your local Maven repository, it could take a few minutes.
- Code Editor: You can right-click on Application.java in /src/ and select Run.
If you don't run the code from an editor or with the Maven commands above, you can compile the code through the Maven
commands above to build a jar file. Then go to the /target directory and run the following command:
java -jar <jarfile>.jar
All iDaaS Design Patterns/Accelerators have application.properties files to enable some level of code reusability and to simplify configuration changes.
In order to run multiple iDaaS integration applications, we had to ensure that the internal HTTP ports
each application uses are unique. To do this we MUST set the server.port property; otherwise it defaults to port 8080 and ANY additional
components will fail to start. iDaaS Connect HL7 uses 9980. You can change this, but you will have to ensure other applications are not
using the port you specify.
Alternatively, if you have a running instance of Kafka, you can start a solution with:
./platform-scripts/start-solution-with-kafka-brokers.sh --idaas.kafkaBrokers=host1:port1,host2:port2
The script will start up the iDAAS server.
It is possible to override the configuration by:
- Providing parameters via the command line, e.g.
./start-solution.sh --idaas.adtPort=10009
- Creating an application.properties next to the idaas-connect-hl7.jar in the target directory
- Creating a properties file in a custom location
java -jar <jarfile.jar> --spring.config.location=file:./config/application.properties
Supported properties include (for this accelerator there is a block per message type that follows the same pattern):
# Admin Interface Settings
management.endpoints.web.exposure.include=hawtio,jolokia,info,health,prometheus
hawtio.authenticationEnabled=false
management.endpoint.hawtio.enabled=true
management.endpoint.jolokia.enabled=true
# urls
# http://localhost:9980/actuator/jolokia/read/org.apache.camel:context=*,type=routes,name=*
# http://localhost:9980/actuator/hawtio/index.html
# Used for internal HTTP server managing application
# Must be unique and defined otherwise defaults to 8080
# used for any Fuse SpringBoot developed assets
server.port=9980
# Kafka Configuration - use comma if multiple kafka servers are needed
idaas.kafkaBrokers=localhost:9092
idaas.integrationTopic=kic_dataintgrtntransactions
idaas.appintegrationTopic=kic_appintgrtntransactions
idaas.terminologyTopic=idaas_terminologies
# One set of these properties per HL7 data type
idaas.hl7ADT_Directory=data/adt
idaas.adtPort=10001
idaas.adtACKResponse=true
idaas.adtTopicName=mctn_mms_adt
...
# CCDA
idaas.hl7ccda_Directory=data/ccda
idaas.ccdaTopicName=mctn_mms_ccda
# Other Settings
idaas.convertCCDAtoFHIR=false
idaas.convertHL7toFHIR=false
idaas.processTerminologies=false
idaas.deidentify=false
idaas.anonymize=false
Within each specific repository there is an administrative user interface that allows for monitoring and insight into the connectivity of any endpoint. Additionally, the underlying metadata is exposed so that implementations can build their own monitoring; the exposed data can be used in numerous very common tools such as Datadog, Prometheus and so forth. Enabling this capability would require a few additional properties to be set.
Below is a visual of how this looks (the visual below is specific to iDaaS Connect HL7):
Every asset has its own specifically defined port; we have done this to ensure multiple solutions can be run simultaneously.
All the URL links are localhost based; simply change them to the server the solution is running on.
iDaaS Connect Asset | Port | Admin URL / JMX URL |
---|---|---|
iDaaS Connect HL7 | 9980 | http://localhost:9980/actuator/hawtio/index.html / http://localhost:9980/actuator/jolokia/read/org.apache.camel:context=*,type=routes,name=* |
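As a quick illustration, the sketch below (not part of this repository) uses the JDK's built-in HTTP client to call the Jolokia URL from the table above and print the Camel route metadata. It assumes the solution is running locally on the default port 9980; the class name is hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RouteMetricsCheck {
    public static void main(String[] args) throws Exception {
        // Jolokia read request against a locally running iDaaS Connect HL7 instance;
        // adjust host/port if server.port was changed from the default 9980.
        String url = "http://localhost:9980/actuator/jolokia/read/"
                + "org.apache.camel:context=*,type=routes,name=*";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}
```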
If you would like to contribute, feel free to; contributions are always welcome!
Happy using and coding....