Hospital information system book


 

HELP: A Dynamic Hospital Information System (eBook, 91,62 €). Medical information systems sit in the middle of all these demands. An Introduction to Hospital Information Systems: aiming to deliver care at the lowest possible cost, this book provides the essential resources needed by medical informatics. Editors: Weaver, C.A., Kiel, J.M., Ball, M.J., Douglas, J.V., O'Desky, R.I., Albright, J.W. Addressed to practitioners of healthcare administration, the book looks beyond traditional information systems and suggests how information systems can bring a competitive advantage.




The book 'New Technologies in Hospital Information Systems' was launched by the European Telematics Applications Project HANSA (Healthcare Advanced …). In producing this new book, the editors have broadened the focus from hospital information systems and their strategic management to strategic management of …. Hospital information systems are just an instance of health information systems, in which a hospital is the healthcare environment.

Methods for evaluating hospital information systems: a literature review. Author(s): Vassilios P. Purpose — It is widely accepted that the use of information and communication technology (ICT) in the healthcare sector offers great potential for improving the quality of services provided and the efficiency and effectiveness of personnel, as well as reducing organizational expenses. This paper seeks to examine various hospital information system (HIS) evaluation methods. Three approaches for evaluating hospital information systems are presented — user satisfaction, usage, and economic evaluation. Findings — The main results are that during the past decade, computers and information systems, as well as their resultant products, have pervaded hospitals worldwide. Unfortunately, methodologies to measure the various impacts of these systems have not evolved at the same pace. To summarize, measurement of users' satisfaction with information systems may be the most effective evaluation method in comparison with the other methods presented. Practical implications — The methodologies, taxonomies, and concepts presented in this paper could benefit researchers and practitioners in the evaluation of HISs.

Storm offers real-time computation for implementing Big Data stream processing on the basis of Hadoop. Unlike the two platforms above, Storm itself does not collect or save data; it receives stream data directly over the network, processes it online, and posts the analysis results back directly over the network.

To date, Hadoop, Spark, and Storm are the most popular and significant distributed cloud computing technologies in the Big Data field. Each of the three systems has its own advantages for processing different types of Big Data: Hadoop and Spark are both offline (batch) platforms, with Hadoop being more complex and Spark offering higher processing speed, while Storm is an online platform suited to real-time tasks.
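As a minimal illustration of the batch style of processing that Hadoop and Spark provide, in contrast to Storm's stream processing, the sketch below uses Spark's Java API to count flagged records in a file. The file name, the "ABNORMAL" marker, and the application name are all hypothetical; this is a sketch, not a prescribed configuration.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class BatchFilterExample {
    public static void main(String[] args) {
        // Local master for demonstration; on a cluster this would point at the resource manager.
        SparkConf conf = new SparkConf().setAppName("BatchFilterExample").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            // Load a (hypothetical) file of medical records, one record per line.
            JavaRDD<String> lines = sc.textFile("records.txt");

            // A whole-dataset batch computation: count lines flagged as abnormal.
            long abnormal = lines.filter(line -> line.contains("ABNORMAL")).count();
            System.out.println("abnormal records: " + abnormal);
        }
    }
}
```

The whole dataset is read and processed before a result is produced, which is exactly the offline behaviour contrasted with Storm's record-by-record streaming above.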

In the medical industry, the data are larger in volume and arise in more varied application scenarios. We can build a dedicated medical Big Data processing platform and develop and deploy related Big Data applications according to the characteristics of the three platforms when processing different types of medical Big Data with different demands. A complete data processing workflow includes data acquisition, storage and management, analysis, and application.

The technologies of each data processing step are as follows. Big Data acquisition, as the basic step of the Big Data process, aims to collect a large amount of data, in both size and type, by a variety of means.

To ensure data timeliness and reliability, distributed, platform-based, high-speed, and highly reliable data fetching and acquisition (extraction and collection) technologies are required, together with high-speed data integration technology for parsing, transforming, and loading the data.
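As a rough sketch of the parse-transform-load idea described above (the file names, field layout, and staging directory are all hypothetical, and a real acquisition layer would run distributed), a minimal pipeline in Java could look like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;

public class SimpleEtl {
    public static void main(String[] args) throws IOException {
        // Extract: read raw comma-separated records (hypothetical file name).
        List<String> raw = Files.readAllLines(Paths.get("raw_vitals.csv"));

        // Transform: trim, drop malformed rows, keep the expected field count.
        List<String> clean = raw.stream()
                .map(String::trim)
                .filter(line -> line.split(",").length == 3)   // patientId,timestamp,value
                .collect(Collectors.toList());

        // Load: write the cleaned records to the staging area used by the storage layer.
        Files.createDirectories(Paths.get("staging"));
        Files.write(Paths.get("staging/vitals_clean.csv"), clean);
    }
}
```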

In addition, data security technology is developed to ensure data consistency and security. Big Data storage and management technologies need to solve issues at both the physical and the logical level.

At the physical level, it is necessary to build a reliable distributed file system, such as HDFS, to provide highly available, fault-tolerant, configurable, efficient, and low-cost Big Data storage.
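To make the HDFS storage layer concrete, here is a minimal sketch using the Hadoop FileSystem client API. The file path and record content are hypothetical, and the sketch assumes fs.defaultFS in the Hadoop configuration points at the cluster's NameNode.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/medical/raw/vitals.txt");   // hypothetical path

            // Write a record; HDFS replicates the blocks across DataNodes for fault tolerance.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("patient42,2016-06-22,120/80\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read it back through the same file-system abstraction.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }
}
```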

At the logical level, it is essential to develop Big Data modelling technology that provides distributed non-relational data management and processing capabilities as well as heterogeneous data integration and organization capabilities. Big Data analysis, as the core of the Big Data processing part, aims to mine the values hidden in the data. Big Data analysis follows three principles: processing all the data rather than random samples, tolerating messiness rather than insisting on accuracy, and seeking association relationships rather than causal relationships.

These principles differ from traditional data processing in their analysis requirements, direction, and technical demands.

With huge amounts of data, relying on the computing capacity of a single server cannot satisfy the timeliness requirements of Big Data processing, so parallel processing technology is needed. For example, MapReduce can improve data processing speed while giving the system high extensibility and high availability. The interpretation and presentation of Big Data analysis results to users is the ultimate goal of data processing.

Traditional forms of data visualization, such as bar charts, histograms, and scatter plots, are no longer sufficient at this scale. Therefore, Big Data visualization techniques, such as three-dimensional scatter plots, networks, stream graphs, and multi-dimensional heat maps, have been introduced to explain Big Data analysis results more powerfully and visually. According to the National Institute of Standards and Technology (NIST), cloud computing is a model for enabling ubiquitous, convenient, and on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services).

Cloud computing has five essential characteristics [8]. On-demand self-service: Users can automatically obtain computing time, network storage, and other computing resources according to their needs, without requiring human interaction with the service provider. Broad network access: Resources are available over the network and accessed through standard mechanisms from heterogeneous client platforms such as smartphones, tablet PCs, notebooks, workstations, and thin terminals.

Resource pooling: Physical and virtual resources are pooled to serve multiple users. Because of this high level of abstraction, users can obtain computing services as usual even though they have no knowledge of, or control over, the actual physical resources. Rapid elasticity: Computing resources can be provisioned and released quickly and flexibly, giving users the impression of an unlimited supply capacity.

For users, the computing resources acquired can automatically increase or decrease according to their needs. Measured service: Cloud computing providers measure and control resources and services in order to achieve the optimal allocation of resources. According to the categories of resources offered, cloud services are divided into three service models, i.e. Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).

SaaS is a new software application and delivery model in which applications run on a cloud infrastructure and application software and services are delivered over the network to the user. Applications can be accessed through a variety of end devices, and the user neither manages nor controls the underlying cloud infrastructure nor handles the software maintenance required to run the applications.

PaaS is a new software hosting service model in which users can interface with the provider and have their own applications hosted on the cloud infrastructure. IaaS is a new infrastructure outsourcing model in which the user can obtain basic computing resources (CPU, memory, network, etc.).

Users can deploy, operate, and control the operating system and associated application software on these resources without needing to manage or even be aware of the underlying cloud infrastructure. To meet the different needs of users, there are basically four deployment models for the cloud computing infrastructure, namely private cloud, public cloud, community cloud, and hybrid cloud.

Private cloud: The cloud platform is designed specifically to serve a particular organization and provides the most direct and effective control over data security and quality of service. In this mode, the organization must invest in, construct, manage, and maintain the entire cloud infrastructure, platform, and software, and bears the associated risk.

Public cloud: Cloud service providers offer free or low-cost computing, storage, and application services. The core attribute is shared resource services delivered via the Internet, such as Baidu Cloud and similar web services. Community cloud: Multiple organizations share the same cloud infrastructure because they have common goals or needs.

Benefits, costs, and risks are shared jointly. Hybrid cloud: The cloud infrastructure is a composition of two or more distinct cloud infrastructures (private, community, or public). Cloud computing is an emerging computing model, and its development depends on its own unique technologies together with a series of supporting traditional techniques:

Rapid deployment: Since the birth of the data centre, rapid deployment has been an important functional requirement. Data centre administrators and users have always pursued faster, more efficient, and more flexible deployment schemes. The cloud computing environment places even higher requirements on rapid deployment. First of all, in a cloud environment, resources and applications change not only over a large range but also with high dynamics, and the services required by users are mainly deployed on demand.

Secondly, the service deployment patterns differ across the levels of the cloud computing environment. In addition, the deployment process is supported by various forms of software systems and system structures; therefore, the deployment tools should be able to adapt to changes in the objects being deployed.

Resource dispatching: Under given circumstances and according to rules governing the use of resources, resource dispatching adjusts resources between different resource users.

These resource users correspond to different computing tasks, and each computing task corresponds to one or more processes in the operating system. The emergence of virtual machines allows all computing tasks to be encapsulated within virtual machines. The core technology of the virtual machine is the hypervisor. It builds an abstraction layer between the virtual machine and the underlying hardware, intercepts the operating system's calls down to the hardware, and provides the operating system with virtual resources such as memory and CPU.

Due to the isolation provided by virtual machines, live migration technology can be used to migrate computing tasks between hosts. Massive data processing: With the Internet as its platform, cloud computing is becoming more widely involved in large-scale data processing tasks.

Because massive data processing operations are so frequent, many researchers are working on programming models that support mass data processing.

The world's most popular mass data processing programming model is MapReduce, designed by Google. The MapReduce programming model divides a task into many fine-grained subtasks, which are scheduled among idle processing nodes so that faster nodes process more tasks; this prevents slow nodes from extending the overall task completion time.
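The classic word-count job illustrates this model: the map phase emits (word, 1) pairs independently for each input split, and the reduce phase sums the counts for each word. The sketch below follows the standard Hadoop MapReduce Java API; the input and output paths are supplied as command-line arguments.

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: each input split is handled independently, emitting (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reduce phase: counts for the same word are gathered from all mappers and summed.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);   // local pre-aggregation on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```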

Massive message communication: A core concept of cloud computing is that resources and software functions are released in the form of services, and message-based collaboration between different services is often needed. Therefore, a reliable, safe, and high-performance communication infrastructure is vital for the success of cloud computing.

Asynchronous message communication mechanisms decouple the internal components within and across the layers of cloud computing and ensure the high availability of cloud computing services (a minimal in-process sketch is given below). At present, large-scale data communication technology for cloud computing environments is still under development. Massive distributed storage: Distributed storage requires storage resources to be abstractly represented and managed in a unified way, while guaranteeing the safety, reliability, and performance of data read and write operations.
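Returning to asynchronous message communication: as an in-process analogue of a cloud message bus (not any specific messaging product), the sketch below uses Java's BlockingQueue to show how a producer and a consumer are decoupled by a buffered, asynchronous channel. The queue capacity and message format are arbitrary.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncMessagingDemo {
    public static void main(String[] args) throws InterruptedException {
        // The queue decouples producer and consumer: the producer hands a message off
        // and continues, which is the essence of asynchronous messaging.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    queue.put("event-" + i);   // enqueue and move on
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        Thread consumer = new Thread(() -> {
            for (int i = 0; i < 5; i++) {
                try {
                    System.out.println("processed " + queue.take());   // consume at its own pace
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```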

A distributed file system allows users to access a remote server's file system as if it were a local file system and to use data stored on multiple remote servers.


Most distributed file systems have redundant backup and fault-tolerance mechanisms to ensure the correctness of data reads and writes. Building on the distributed file system and according to the characteristics of cloud storage, cloud storage services make the corresponding configurations and improvements. With the continuous development of the medical industry, the expanding scale of medical data, and its increasing value, the concept of medical Big Data has drawn the attention of many experts and scholars.

In the face of the sheer scale of medical Big Data, traditional storage architectures cannot meet the need, and the emergence of cloud computing provides a solution for storing and retrieving large volumes of medical data. According to its functions, the medical cloud platform is divided into five parts, and every part can form an independent sub-cloud. The data mining layer and the application layer share the data storage layer. The medical cloud deployment is shown in Figure 1.

The figure also illustrates the direction of data flow in the medical cloud. Data acquisition layer: The storage formats of medical Big Data are diverse, including structured, unstructured, and semi-structured data.


So the data acquisition layer needs to collect data in a variety of formats. The medical cloud platform also needs to interface with various medical systems and read data from the corresponding interfaces. Given the current rapid development of social software and networks, combining medicine with social networking is a future trend.

So it is essential to collect these data as well. Finally, the data acquisition layer processes the collected data of different formats so that they can be stored centrally.

Data storage layer: The data storage layer stores all data resources of the medical cloud platform. The cloud storage layer adopts a platform model for its architecture and merges the data collected by the data acquisition layer into blocks for storage. Data mining layer: Data mining is the most important part of the medical cloud platform; it completes the data mining and analysis work on a computer cluster architecture.

Using the corresponding data mining algorithms, the data mining layer extracts knowledge from the data in the data storage layer and the enterprise database and stores the results back in the data storage layer.

The data mining layer can also feed the application layer with the rules and knowledge it has mined, presented via visualization methods. Enterprise database: Medical institutions require not only convenient, large-capacity cloud storage but also highly real-time and highly confidential local storage of data; this is what the enterprise database provides. The enterprise database exchanges data with the cloud storage layer and the data mining layer, and it passes data to the application layer for display.

Application layer: The application layer is mainly geared to the needs of users and displays data, either original or derived through data mining.

The outcome of each phase of the data mining process determines which phase has to be performed next. There are a few known attempts to provide a specialized data mining (DM) methodology or process model for applications in the medical domain.

However, the authors do not cover some important aspects of practical DM application, such as data understanding, data preparation, mining non-structured data, and deployment of the modelling results. Catley et al. The results of that work will benefit researchers working with ICU temporal data but are not directly applicable to other medical data types or DM application goals. Olegas Niaksu et al. There are five approaches to data mining tasks: Classification refers to supervised methods that determine the target class value of unseen data.

The process of classification is shown in Figure 3. In classification, the data are divided into training and test sets, used for learning and validation, respectively (a minimal sketch is given below). The most popular algorithms in medical data mining are described in Table 1.

An XML schema is a description of a type of XML document, typically expressed in terms of constraints on the structure and content of documents of that type, above and beyond the basic syntactical constraints imposed by XML itself.
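As the minimal sketch referred to above, the following toy nearest-neighbour classifier shows the training/test split in code. The features, labels, and values are invented purely for illustration and do not correspond to any specific algorithm in Table 1.

```java
import java.util.Arrays;
import java.util.List;

public class NearestNeighbourDemo {

    // A toy record: two numeric features (e.g. age, systolic blood pressure) and a class label.
    record Sample(double[] features, String label) {}

    // Classify a query point by the label of its closest training sample (1-NN).
    static String classify(List<Sample> training, double[] query) {
        Sample best = null;
        double bestDist = Double.MAX_VALUE;
        for (Sample s : training) {
            double d = 0;
            for (int i = 0; i < query.length; i++) {
                double diff = s.features()[i] - query[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = s; }
        }
        return best.label();
    }

    public static void main(String[] args) {
        // Training set: labelled examples the model learns from.
        List<Sample> training = Arrays.asList(
                new Sample(new double[]{35, 120}, "normal"),
                new Sample(new double[]{62, 165}, "hypertensive"),
                new Sample(new double[]{48, 140}, "hypertensive"),
                new Sample(new double[]{29, 115}, "normal"));

        // Test set: unseen examples used to validate the learned model.
        double[][] test = {{58, 150}, {33, 118}};
        for (double[] t : test) {
            System.out.println(Arrays.toString(t) + " -> " + classify(training, t));
        }
    }
}
```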

Standardized Semantic Web technologies: The middle layers contain technologies standardized by the W3C to enable building Semantic Web applications. A collection of RDF statements intrinsically represents a labelled, directed multi-graph. As such, an RDF-based data model is better suited to lightweight, flexible, and efficient knowledge representation than relational models. Ontology is at the core of the Semantic Web stack.

By formally defining terms, relations, and constraints of commonly agreed concepts in a particular domain, ontology facilitates knowledge sharing and reuse in a declarative and computational formalism. Combined with rules and query languages, the static knowledge in the ontology can be dynamically utilized for semantic interoperation between systems. Logic consists of rules that enable advanced ontology-based inferences.

These rules extend the expressivity of the ontology with formal rule representation languages. Encryption is used to verify the reliability of data sources supporting the Semantic Web, typically using digital signatures of RDF statements.
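As a generic illustration of the digital-signature idea (not a Semantic Web-specific API), the following sketch signs a serialized RDF statement with the standard Java security classes and then verifies it. The statement text and key size are arbitrary; in practice the data source would publish its public key.

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        // Key pair for the data source (illustrative 2048-bit RSA key).
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair keys = gen.generateKeyPair();

        // A serialized RDF statement (hypothetical content).
        byte[] statement = "<ex:patient42> <ex:hasDiagnosis> \"hypertension\" ."
                .getBytes(StandardCharsets.UTF_8);

        // The source signs the statement with its private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(statement);
        byte[] sig = signer.sign();

        // A consumer verifies the statement against the source's public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(statement);
        System.out.println("signature valid: " + verifier.verify(sig));
    }
}
```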

Proof has been conceived to allow the explanation of answers generated by automated agents. This will require the translation of Semantic Web reasoning mechanisms into some unifying proof representation language. Trust is supported by verifying that the premises come from trusted sources and by relying on formal logic while deriving new information.

OWL is built upon description logic (DL), which is a family of formal knowledge representation languages used in artificial intelligence to describe and reason about the relevant concepts of an application domain. Major constructs of OWL include individuals, classes, properties, and operations. OWL provides three sublanguages, OWL Lite, OWL DL, and OWL Full, and each of these sublanguages is a syntactic extension of its simpler predecessor.

They are designed for use by different communities of implementers and users with varying requirements for knowledge representation. Variables are prefixed with a question mark (e.g. ?x). A complete specification of SWRL built-in atoms can be found in [18]. Apache Jena (Jena for short) is a free and open-source Java framework for building Semantic Web and linked data applications [19]. Providing various APIs for the development of inference engines and storage models, Jena is widely used in the development of systems and tools related to Web ontology management.

ARQ: Jena's SPARQL query engine; it supports remote federated queries and free-text search. TDB: Jena's native high-performance triple store, which can be used to persist data.

Ontology API. Inference API: It can be used to reason over the data to expand and check the content of the triple store. The interaction between the different APIs is shown in Figure 6 (Figure 6: Interaction between the different APIs of Jena).

The applications of Semantic Web technology in the analysis of medical Big Data

The volume, velocity, and variety of medical data, which are being generated exponentially from biomedical research and electronic patient records, require special techniques and technologies [20].

Semantic Web technologies are meant to deal with these issues. The Semantic Web is a collaborative movement that promotes standards for the annotation and integration of data. Its aim is to convert the current web, dominated by unstructured and semi-structured documents, into a web of data by encouraging the inclusion of semantic content in data accessible through the Internet. The development of ontologies on the basis of Semantic Web standards can be seen as a promising approach for semantic-based integration of medical information.
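As a small, hedged example of these ideas using the Jena APIs introduced earlier, the sketch below builds a tiny RDF graph for a hypothetical patient (the namespace, resource names, and property are invented) and queries it with SPARQL through ARQ.

```java
import org.apache.jena.query.*;
import org.apache.jena.rdf.model.*;
import org.apache.jena.vocabulary.RDF;

public class MedicalRdfDemo {
    public static void main(String[] args) {
        String ns = "http://example.org/medical#";   // hypothetical namespace

        // Build a tiny RDF graph: one patient resource with a type and a diagnosis.
        Model model = ModelFactory.createDefaultModel();
        Property hasDiagnosis = model.createProperty(ns, "hasDiagnosis");
        model.createResource(ns + "patient42")
             .addProperty(RDF.type, model.createResource(ns + "Patient"))
             .addProperty(hasDiagnosis, "hypertension");

        // Query the graph with SPARQL through ARQ.
        String sparql = "PREFIX ex: <" + ns + "> "
                      + "SELECT ?p ?d WHERE { ?p ex:hasDiagnosis ?d }";
        try (QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(sparql), model)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("p") + " has diagnosis " + row.getLiteral("d"));
            }
        }
    }
}
```

The same model could be persisted in TDB or expanded with the Inference API; this sketch only shows the in-memory case.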

Many resources have ontology support, due to its consistency and expressivity. Figure 7 shows an example of the application of ontology in the big picture of Big Data analysis [20].

Figure 7. Ontology and rules in the big picture of Big Data analysis.

The picture includes three layers: the data layer, the knowledge layer, and the application layer. The data layer consists of a wide variety of heterogeneous and complex data, including structured, semi-structured, and unstructured data. In the knowledge layer, ontology can be used to access the Big Data, which is processed and analysed with the ontology, rules, and reasoners to derive inferences and obtain new knowledge from it.

Then, in the application layer, there are several applications that can use the new knowledge, such as decision support, semantic service discovery, and data integration.

Medical cloud platform construction for medical Big Data processing

The medical cloud platform for Big Data processing is mainly divided into three levels. The first level implements a hospital private cloud, which serves as the basis of the three-tier application model.

The second level implements the medical community cloud, which builds on the first level and provides medical cloud services. The third level implements the applications of medical Big Data.

It builds a medical Big Data processing system based on the distributed computing platform Hadoop.

References

Methods Inf Med.
Shortliffe EH. Assessing the prognoses on health care in the information society — thirteen years after. J Med Syst.
Reichertz PL. Hospital information systems — past, present, future. First published in: Int J Med Inform.
From hospital information systems to health information systems. Problems, challenges, perspectives.
Haux R. Health information systems — past, present, future. Int J Med Inform.
Toward an information infrastructure for global health improvement. Yearb Med Inform.
A complementary approach for understanding health care systems and population health. Methods Inf Med Open: e13–e
