Installation Quick Overview

  • Are You Considering ESP?
  • Download the software
  • Platform Requirements
  • Hardware
  • Backup
  • Software
  • Data Flow and Retention
  • Personnel Requirements
  • Timeline
  • Platform Setup Phase
  • Data Feed Setup Phase
  • Historical Data Backfill Phase
  • Code Mapping Phase
  • Validation Phase
  • Reporting Setup Phase
  • PopMedNet

Are You Considering ESP?

Are you considering installing ESP at your hospital or practice? If so, this page will walk through the resources you will need and the timeline to get ESP up and running.

We can provide support, give software demonstrations, discuss implementation options, or answer any other questions you may have! Contact us here.

Download the software

ESP is an open source software platform. The source code can be obtained here.

Platform Requirements

ESP runs inside the covered entity’s datacenter.

ESP runs inside your organization, generally on an independent server, either virtual or physical. The server is installed in your secure datacenter where all your other data is managed. Your staff manage all network access to the server and control the network path by which any patient information leaves the ESP server, normally mandatory notifiable disease case reports sent over a secure channel to the Department of Public Health.

Hardware

ESP may be deployed on any modern server hardware platform that runs the Linux operating system. It provides an interactive web-based management interface and timed batch tasks to read and process incoming data. We do not provide support for non-Linux deployments (for example, MacOS or MS Windows). However, such deployments should be relatively straightforward, since all of the infrastructure used in the ESP application is cross-platform open-source software such as Python and Django.

We offer the following server configuration guidelines:

CPU: A recent commodity Intel or AMD multi-core CPU, with an absolute minimum of 4 cores so the main processes can run without contention. With more cores, the web server and database engine can be tuned to handle larger patient volumes.

Fast storage: Modern 6 Gb/s, 7,200 RPM commodity SATA drives are fine for local hard disks. SAS or solid-state storage will be faster if you can afford it, but the gains will be marginal. A RAID 10 disk array sized to store all encounter data accumulated over the reliable life of the server (e.g. 3 or 4 years), plus whatever data you can backfill, is recommended. For 1 million patients, 10 years of data is likely to be less than 1TB, so the example server described below would likely be adequate for a single-server deployment.

Adequate RAM: RDBMS index and other buffers are the major consumers of RAM. A minimal configuration for a very small installation might get by with 4GB but most sites will need 8GB or more to allow properly tuned RDBMS buffer allocation.

Example: for a million patients with projected historic and accumulated data covering 10 years, a dedicated server with two quad-core modern commodity CPUs, 32GB of RAM, and a 3TB hot-swap RAID 10 disk array with a hardware controller would be sufficient for most scenarios.

Processor, storage, and memory requirements will vary depending on the expected volume of medical records and daily transactions. Most of the load occurs in batch mode. Database throughput (and of course the underlying disk I/O) is stressed and is likely to be the rate-limiting resource. Larger amounts of system RAM allow the RDBMS backend to be configured to cache more data, improving throughput. A separate database server is advisable for very large installations. Virtual machines have additional overheads, so they may need additional resources compared to bare metal and will require careful attention to disk I/O for reasonable performance.
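
As a back-of-the-envelope illustration of the sizing guidance above, the short Python sketch below projects storage needs from the assumption that 1 million patients over 10 years consume roughly 1TB. The smaller-site figures and the 3x headroom factor are illustrative assumptions, not measured values.

  # Back-of-the-envelope storage estimate using the figures above: roughly 1 TB
  # covers 1 million patients over 10 years. All numbers are illustrative.
  TB = 1024 ** 4  # bytes per terabyte

  patients = 1000000
  years = 10
  assumed_total_bytes = 1 * TB

  bytes_per_patient_year = assumed_total_bytes / float(patients * years)
  print("~{0:.0f} KB per patient-year".format(bytes_per_patient_year / 1024))

  # Project a hypothetical smaller site: 250,000 patients, a 4-year server life
  # plus 2 years of backfill, with 3x headroom for indexes, RAID, and growth.
  site_patients = 250000
  site_years = 4 + 2
  headroom = 3
  suggested_bytes = site_patients * site_years * bytes_per_patient_year * headroom
  print("Suggested usable storage: ~{0:.2f} TB".format(suggested_bytes / TB))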

Backup

Most institutions will already have substantial tape or other backup infrastructure. We strongly recommend installing your backup system's Linux client on the ESP server so the data can be backed up safely and restored quickly in the event of a disaster.
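
As one possible approach, the sketch below shows a minimal nightly PostgreSQL dump that a cron job could run on the ESP server, producing files for the site's backup client to sweep up. The paths and database name are assumptions, not ESP defaults.

  # Minimal nightly backup sketch (hypothetical paths and database name), intended
  # to be run from cron on the ESP server alongside the site's own backup client.
  import datetime
  import subprocess

  BACKUP_DIR = "/srv/backups/esp"  # assumed local staging area swept up by the backup client
  DB_NAME = "esp"                  # assumed PostgreSQL database name

  stamp = datetime.date.today().isoformat()
  dump_path = "{0}/esp-{1}.dump".format(BACKUP_DIR, stamp)

  # pg_dump custom format (-Fc) supports compressed, selective restores with pg_restore.
  subprocess.check_call(["pg_dump", "-Fc", "-f", dump_path, DB_NAME])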

Software

ESP runs on the Linux operating system. It is developed on Ubuntu Server LTS systems and has been run on many other Linux distributions, including Red Hat, SUSE, and CentOS.

A basic Linux server would need the following additional software:

  • The OpenSSH service is required for administrative logins from anywhere other than the system console.
  • iptables, the Linux firewall, should be installed and configured to manage and restrict system access according to policy.
  • Git is used for ESP distribution.
  • The ESP DataMart requires PostgreSQL as the RDBMS.
  • ESP software is developed using Python 2.7 and the Django ORM. An ESP installation uses the Python virtual environment (virtualenv) infrastructure. A minimal database configuration sketch follows this list.
  • The administrative web interface is served by the Apache web server.
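
The following is a minimal sketch of a Django database configuration pointing at a local PostgreSQL instance, of the kind the implementation consultant would adapt during setup. The database name, role, and credentials shown are placeholders, not ESP defaults.

  # Minimal Django DATABASES setting sketch for an ESP instance backed by
  # PostgreSQL. Names and credentials below are placeholders.
  DATABASES = {
      'default': {
          'ENGINE': 'django.db.backends.postgresql_psycopg2',
          'NAME': 'esp',           # assumed database name
          'USER': 'esp',           # assumed role with rights on the ESP schema
          'PASSWORD': 'change-me',
          'HOST': 'localhost',     # point at a separate DB server for very large sites
          'PORT': '5432',
      }
  }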

Data Flow and Retention

ESP maintains details of all patients and their providers, including encounters, laboratory tests, pregnancies, prescriptions and diagnosis codes, in order to search for notifiable disease cases. This data must be transferred from an existing EMR or EHR system into ESP's RDBMS tables for the ESP software to be able to process it. Typically, data is transferred by an 'Extract, Transform and Load' (ETL) process using specially formatted text files which ESP reads. The ETL process does not need to be real-time because ESP case detection operates in timed batch mode, so a daily extract and load is usually sufficiently timely. We recommend transferring details of all clinical data: storage is relatively inexpensive, and if any data are missing, false negatives may arise in case detection.
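
To illustrate the shape of such a batch load, the sketch below reads hypothetical pipe-delimited lab result extracts from a drop directory. The field layout and paths are assumptions; the actual file specifications are defined by the ESP ETL documentation for your version.

  # Illustrative daily load step. The pipe-delimited field layout here is
  # hypothetical; consult the ESP ETL documentation for the real specifications.
  import csv
  import glob

  INCOMING = "/srv/esp/incoming"  # assumed drop directory for the nightly extract

  def load_lab_results(path):
      """Read one pipe-delimited extract file and yield rows as dictionaries."""
      fields = ["patient_id", "order_date", "native_code", "native_name", "result_string"]
      with open(path) as handle:
          for row in csv.DictReader(handle, fieldnames=fields, delimiter="|"):
              yield row

  for extract in sorted(glob.glob(INCOMING + "/labresults.*.txt")):
      rows = list(load_lab_results(extract))
      # In a real feed these records are written into ESP's RDBMS tables;
      # here we simply count them to show the loop structure.
      print("{0}: {1} lab result rows".format(extract, len(rows)))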

We recommend that the server be sized with sufficient disk storage to maintain the volume of data expected over the roughly four-year operational life of the machine. It would be possible to purge old records over time, but doing so decreases the accuracy with which many important notifiable diseases can be reliably detected. For some conditions, such as hepatitis, the 'first' acute episode is of particular importance, and without historical data on laboratory tests ESP's algorithm cannot be specific, so we strongly recommend full data retention and even 'backfilling' of as much historical data as can be provided.


A number of other functions are available for ESP instances including syndromic surveillance, vaccine adverse event reporting and chronic disease detection and aggregate reporting. All of these extra functions will benefit from complete data retention.

Because the ESP data are secured by the responsible organization, security is as good as it is for the host EMR. Complete and permanent data retention is therefore highly recommended, and server sizing should take this into account.

Personnel Requirements

ESP is a complex enterprise application, requiring a variety of staff roles for planning, deployment and ongoing maintenance. While each of these roles requires specific technical and other skills, these skills are widely available and staff with the necessary skills should be available in most major cities.

The system administrator, medical liaison, and functional analyst are ongoing roles required for quality control and maintenance of an ESP instance. However, the amount of effort required for these roles each week will decrease substantially after the initial deployment. Staff will need appropriate IRB clearance if this is a research project and if they will potentially have access to protected health information (PHI).

Project Sponsor

Installing and operating an ESP instance involves substantial internal institutional commitment in terms of effort and resources, as well as senior-level collaboration with external agencies, such as the local or state health department, and their staff. Leadership, 'ownership', commitment and dedicated effort from a senior institutional staff member are essential to facilitate institutional 'buy-in' and to ensure appropriate teamwork and a successful implementation.

Project Manager

There are many tasks to be completed during implementation and operation. It is recommended that an experienced administrator take the project management role with dedicated time to ensure that all the necessary tasks are completed in an orderly and timely manner.

Local EMR technical specialists and IT staff

Dedicated time will be needed from technical staff with expertise in the local EMR system to prepare and deploy the ETL extraction and transfer to the ESP server, to provide advice on coding systems and changes, and to ensure reliable production operation of the ETL and data transfer. Effort and involvement from local IT and security staff will also be required to facilitate the installation, operation, backup and security of the ESP server, since it will contain identifiable patient data and will generally require network connectivity, power and space in the local data center, as well as backup during routine operation.

System Administrator

The system administrator maintains the underlying operating system and hardware of the ESP server.

  • Linux/Unix system administration skills: 2 or more years of production server system administration recommended
  • Familiarity with the chosen RAID storage hardware
  • Competent with system, network and security issues, as required for hosting PHI

The server may be managed by an existing system administration team, provided they have the necessary skills.

Functional Analyst

The functional analyst is responsible for mapping lab test codes from the source EMR system to the named heuristics used in Nodis case definitions, and for developing templates for automated case reporting.

  • Experience with medical records, electronic medical records and the coding ontologies in use at your institution: at least 2 years
  • Ability to use web-based applications at an advanced level (code mapping and administration interfaces)
  • Experience with the Open Source software development process, tools, etc. would be desirable
  • Experience with, or ability to quickly learn, basic source control software (Git) operation
  • Relevant SQL query interface skills and experience are highly desirable for data quality assurance and ad-hoc reporting

Implementation Consultant (IC)

The implementation consultant is the technical lead for an ESP deployment. The IC is able to set up an ESP instance from source, collaborating with the system administrator as necessary for installation of dependency packages. The IC can add new disease definitions, provided those definitions use existing HEF and Nodis constructs, and contributes new code, bug fixes, and documentation to the ESP open source project. An advanced IC may be able to add more complex new disease definitions that require additions to the core HEF/Nodis code.

  • Two or more years of professional, productive object-oriented programming experience with Python or a similar language is highly recommended.
  • Experience with Open Source (OS) collaborative software development processes, tools, etc is highly recommended.
  • Experience with at least one modern web application framework is required
  • Familiarity with Git source code control system is required
  • Comfortable working in a Linux/Unix shell environment. Linux/Unix system administration experience is highly desirable.
  • Proficiency writing ad-hoc SQL queries is required.
  • Extract, Transform and Load (ETL) experience is a must; proficiency with an advanced ETL tool such as Pentaho, Talend, Informatica, etc. is valuable
  • PostgreSQL experience is highly recommended, but other RDBMS experience is transferrable. Administration and backend programming experience are highly desirable.

Many of our partner practices use an informatics vendor familiar with ESP to provide these services.

Medical Liaison

The medical liaison advises the functional analyst and drives all medical and epidemiological aspects of quality control for case finding and validation. The medical specialist's specific expertise is needed in mapping the EMR source system's 'native' laboratory and prescription codes to the decision rules ('heuristics') used by ESP. This person will also need to work closely with the implementation consultant to add new disease definitions to ESP if those are desired.

  • Physician, usually on staff of sponsoring organization.
  • Basic ability to use a web-based application (case management interface)
  • Dedicated time for quality assurance and case management during initial testing and for duration of production operation.

Timeline

The time required to deploy ESP depends on a variety of factors, and will vary by institution. However, a typical deployment can be divided into several phases for planning purposes. Note that, in practice, some phases can and should be expected to overlap – for example, the initial few months of daily production operation should also be a period of intensive quality assurance.

Platform Setup Phase

In this phase, a fresh ESP instance is installed and made ready for operation. The system administrator prepares the server to host ESP, working with the institution's networking staff on security and remote access arrangements. The implementation consultant then installs the ESP software and configures its access to the database.
Once a server is ordered, it may take a few weeks to be delivered and physically installed in the institutional data center. Once networking, power, operating system installation and configuration, and other local issues are resolved, the setup phase can be completed in under a week. Note, however, that some platform choices (server OS, database backend, web server, VPN arrangements, etc.) may require significantly more effort than others. Backup and disaster recovery strategies can be deployed at this time.

Data Feed Setup Phase

Once an ESP instance has been readied for operation, it must be provided with data from the institution's electronic medical record (EMR) or electronic health record (EHR) system. The functional analyst and implementation consultant work in close coordination with the technical staff who maintain the institution's EMR system. Establishment of the data feed can be a fairly complex ETL task.

The goals of this phase are:

  • Secure transfer of identifiable patient data to ESP server
  • Mapping of coded fields (e.g. diagnostic and laboratory test codes) in source data to the standard codes used in ESP’s case finding algorithms.
  • Loading of source data into ESP, including any necessary transformation
  • Archiving source data files after they are loaded
  • Automatic scheduled operation of all the above

Depending on the complexity of the ETL process required, this may be the most time-consuming phase. It can be considerably easier if current lists of all relevant local codes for the laboratory tests and prescriptions needed for case finding can be prepared for loading through the mapping interface. Extraction scripts for Epic Clarity (SQL) and Caché (MUMPS) are available and can significantly reduce ETL development time, but they would have to be carefully reviewed when implemented at a new site.
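
As an illustration of the archiving and scheduled-operation goals listed above, the sketch below compresses loaded extract files into an archive directory; in practice a nightly cron entry would run the extract, load, and archive steps in sequence. The paths are assumptions.

  # Sketch of the archive step run after a successful nightly load; paths are
  # assumptions, and scheduling would normally be handled by cron.
  import gzip
  import os
  import shutil

  INCOMING = "/srv/esp/incoming"
  ARCHIVE = "/srv/esp/archive"

  for name in sorted(os.listdir(INCOMING)):
      src = os.path.join(INCOMING, name)
      dst = os.path.join(ARCHIVE, name + ".gz")
      with open(src, "rb") as fin, gzip.open(dst, "wb") as fout:
          shutil.copyfileobj(fin, fout)
      os.remove(src)  # keep the incoming directory empty for the next extract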

Historical Data Backfill Phase

We recommend back-populating your ESP instance with at least two years of historical EMR data. This facilitates differentiating between acute and chronic infections (e.g. hepatitis B, hepatitis C) and establishes a sufficiently large set of cases for the Validation phase of installation (below). Typically, historical backfill can be accomplished by tweaking, or manually operating, the ETL procedure established above to provide historical instead of current data. This phase should be quick, but the time required will depend on the amount of historical data loaded and upon the technical staff who maintain the institution's EMR system.


Once historical data is available, ad-hoc SQL queries and some ESP utilities can be used to generate lists of native codes that are likely to require mapping. For example, finding all native laboratory codes containing text matching 'chlam*' can be helpful in finding all tests for Chlamydia trachomatis.
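
A hedged example of such an ad-hoc query, run through psycopg2, is shown below. The table and column names (emr_labresult, native_code, native_name) are assumptions for illustration; check your ESP schema for the actual names.

  # Illustrative ad-hoc query for native lab codes mentioning "chlam".
  # Table and column names are assumptions, not guaranteed ESP schema names.
  import psycopg2

  conn = psycopg2.connect(dbname="esp")
  cur = conn.cursor()
  cur.execute("""
      SELECT DISTINCT native_code, native_name
      FROM emr_labresult
      WHERE native_name ILIKE %s
      ORDER BY native_code
  """, ("%chlam%",))
  for code, name in cur.fetchall():
      print("{0}\t{1}".format(code, name))
  conn.close()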

Code Mapping Phase

Codes identifying lab tests must be mapped to the heuristics used by ESP's disease analysis logic. The functional analyst will work in conjunction with the medical liaison, the institution's EMR system technical staff, and the institution's lab managers. ESP currently provides tools to facilitate the mapping task, and enhanced tools are under active development.

This phase can be fairly simple or quite complex, depending upon the characteristics of the source EMR records.
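
To make the mapping task concrete, the sketch below ties a few invented native lab codes to named heuristics. In practice the mapping is entered through ESP's web-based mapping interface rather than in code; the codes and heuristic names here are made up.

  # Conceptual illustration of the mapping task: each native lab code feeds a
  # named heuristic. All codes and heuristic names below are invented.
  NATIVE_TO_HEURISTIC = {
      "LAB1234": "chlamydia_test",  # in-house Chlamydia trachomatis test code
      "LAB1235": "gonorrhea_test",  # in-house Neisseria gonorrhoeae test code
      "LAB2201": "alt",             # alanine aminotransferase, used by hepatitis algorithms
  }

  def heuristic_for(native_code):
      """Return the heuristic a native code feeds, or None if it is still unmapped."""
      return NATIVE_TO_HEURISTIC.get(native_code)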

Validation Phase

The first set of cases detected by ESP must be validated to ensure the system has been configured correctly. Providers will want to compare cases detected by ESP against a list of known valid cases, typically collected by existing manual reporting methods (e.g. records of notifiable diseases reported to the health department, or registries for diabetes and other chronic conditions). ESP cases are cross-matched to the external list of cases and all discrepant cases are analyzed. For cases detected by ESP but absent from the comparison list, the goal is to determine whether they are true cases; if not, the goal is to determine whether the case was falsely detected by ESP due to a mapping error, an algorithm error, or insufficient data in the historical data feed. For cases missed by ESP, the goal is to determine why they were missed. Possible reasons include incomplete ETL to ESP, incomplete mapping, or data supporting case determination that does not reside as structured data in the EMR (for example, if a case was diagnosed outside of the partner practice).


Some diseases, for instance hepatitis A/B/C, require much more labor to examine manually than others, such as Chlamydia. Thus the effort required for validation will vary in part with the mix of cases seen by the institution. For conditions with very large case numbers (e.g. diabetes, hypertension, obesity), validation of discrepant cases can be done using sampling rather than mandatory review of all cases.
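
The cross-match itself is conceptually simple set arithmetic, as in the sketch below, where invented medical record numbers stand in for real case identifiers.

  # Sketch of the cross-match: compare case identifiers detected by ESP with a
  # reference list from existing manual reporting. Identifiers are invented.
  esp_cases = {"MRN1001", "MRN1002", "MRN1003"}
  reference_cases = {"MRN1002", "MRN1003", "MRN1004"}

  confirmed = esp_cases & reference_cases  # detected by both systems
  esp_only = esp_cases - reference_cases   # potential false positives to review
  missed = reference_cases - esp_cases     # potential false negatives to review

  print("Confirmed: {0}".format(sorted(confirmed)))
  print("ESP only (check mapping/algorithm): {0}".format(sorted(esp_only)))
  print("Missed by ESP (check ETL, mapping, unstructured data): {0}".format(sorted(missed)))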

Reporting Setup Phase

In this phase, ESP is configured for automated reporting of cases. ESP provides a template-based case reporting tool capable of generating HL7, simple XML, and other text-based output files. The functional analyst will develop the output template and work with the implementation consultant and system administrator to establish automatic, secure transfer of case notifications to the local public health agency. Acceptance testing and ongoing quality control will require close ongoing collaboration between ESP project staff and the local public health staff.
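
As a rough illustration of template-based reporting, the sketch below fills a simple XML skeleton from case attributes. ESP's own reporting tool has its own template mechanism and HL7 support; the element names and case fields here are invented.

  # Simplified illustration of template-based case reporting. Element names and
  # case fields are invented; they do not reflect ESP's actual templates.
  from string import Template
  from xml.sax.saxutils import escape

  CASE_TEMPLATE = Template(
      "<case>"
      "<condition>$condition</condition>"
      "<patient_mrn>$mrn</patient_mrn>"
      "<report_date>$date</report_date>"
      "</case>"
  )

  def render_case(case):
      """Fill the XML skeleton from a dictionary of case attributes."""
      return CASE_TEMPLATE.substitute(
          condition=escape(case["condition"]),
          mrn=escape(case["mrn"]),
          date=escape(case["date"]),
      )

  print(render_case({"condition": "chlamydia", "mrn": "MRN1002", "date": "2015-06-01"}))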

Setting Up The PopMedNet System

Note: PopMedNet is not required for notifiable disease case reporting.

What are the browser requirements for the PopMedNet query tool?

The PopMedNet Portal is designed to work with Internet Explorer (IE) 7 or later. Earlier versions of IE may not display the user interface properly. Although IE7 is the only officially supported browser, other browsers such as Firefox and Chrome may also work; Firefox has been used extensively in testing.

How is a network established?

Any group of institutions can choose to create a network. Most networks develop an organizational structure to address network governance and operations. A network coordinating center that includes the Network Administrator (a role in the network) is often implemented to handle day-to-day operation. Once the system is established and hosted, the Network Administrator can set up the network based on the governance rules that specify which organizations should be included, which users get login credentials, and the roles for all users.

Once a network is established by the Network Administrator, how long does it take for partners to join and participate?

Participation requires partners to 1) install the Data Mart Client, 2) establish settings and permissions on the network portal, 3) establish settings within the local Data Mart Client, and 4) create the necessary ODBC connections. Establishing user settings takes about 30 to 60 minutes.

Can governance rules be incorporated on a network-by-network basis?

Yes. The software allows establishment of governance rules related to role-based access control, permissions, and query features. Rules can include who gets which roles, how long data remains on the portal before deletion, who can query whom, and what query types are available. Governance rules are the joint responsibility of the Network Administrator and the individual partners, who must give authorized users permission to send them queries. Refer to the PopMedNet™ Overview and Technical document for a complete list and more details.

Does the system have any notification capabilities?

Yes. The software includes extensive and flexible notification options for users. Notifications are based on changes in status within the system. Users can choose to receive notifications for a range of activities, including when query results are available, when query status changes, when queries have been sent to their Data Mart for execution, when users are added or removed, etc. Query reminders are also available.