OpenMRS SoC - 10th week update



Hello Everyone,

This week, I added a new feature to update the status of a selected narrative, so reviewers will be able to easily change the status of the narrative they are working on.
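Under the hood the change is simple: load the selected narrative, set the new status, and persist it again. Here is a tiny self-contained sketch of that idea; the Narrative class, the status values, and the method names are hypothetical stand-ins rather than the actual module code.

```java
/** Hypothetical review states a reviewer can move a narrative through. */
enum NarrativeStatus { PENDING, IN_REVIEW, COMPLETED, REJECTED }

/** Hypothetical stand-in for a stored patient narrative record. */
class Narrative {
    private final int id;
    private NarrativeStatus status = NarrativeStatus.PENDING;

    Narrative(int id) { this.id = id; }
    int getId() { return id; }
    NarrativeStatus getStatus() { return status; }
    void setStatus(NarrativeStatus status) { this.status = status; }
}

public class NarrativeStatusUpdater {

    /** Applies the reviewer's chosen status to the selected narrative. */
    public static void updateStatus(Narrative selected, NarrativeStatus newStatus) {
        selected.setStatus(newStatus);
        // In the real module this would be persisted through the module's service layer.
    }

    public static void main(String[] args) {
        Narrative narrative = new Narrative(42);
        updateStatus(narrative, NarrativeStatus.IN_REVIEW);
        System.out.println("Narrative " + narrative.getId() + " is now " + narrative.getStatus());
    }
}
```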

I also added a discussion panel (a comment system) that lets care providers discuss a patient's condition for each patient narrative retrieved by the system.

Meanwhile, I have been working on displaying the saved patient narrative video (a complex observation) back in the care provider console for the care providers' review. Unfortunately, this feature isn't complete yet; there are some odd issues, such as broken-pipe errors, that I will need to resolve soon.
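For what it's worth, "Broken pipe" usually just means the browser dropped the connection (for example, the user stops or seeks the video player) while the server is still writing bytes. Below is a minimal sketch of the kind of streaming servlet I have in mind; the servlet name, the way the stored video file is located, and the error handling are my own illustrative assumptions, not the finished module code.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Streams a stored narrative video (complex obs data) back to the care provider console. */
public class NarrativeVideoServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        // Hypothetical lookup: in the real module the file would come from the complex obs handler.
        String obsId = request.getParameter("obsId");
        File videoFile = new File("/opt/openmrs/complex_obs/" + obsId + ".mp4");

        response.setContentType("video/mp4");
        response.setContentLength((int) videoFile.length());

        InputStream in = new FileInputStream(videoFile);
        OutputStream out = response.getOutputStream();
        byte[] buffer = new byte[8192];
        try {
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
            out.flush();
        } catch (IOException e) {
            // "Broken pipe" lands here when the browser aborts the download mid-stream
            // (e.g. the user seeks or closes the player); it is usually safe to ignore.
        } finally {
            in.close();
        }
    }
}
```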
 







OpenMRS SoC - 9th week update



Hello everyone,

It's almost the end of the ninth week of GSoC 2013. Yeap! More than two months have passed already. This week, I worked on the Patient Narratives video stream capturing feature. It wasn't an easy task: the WebRTC MediaStream API isn't a very mature tool yet (it's actually an ongoing effort by the Mozilla and Chrome teams), but it is the most suitable option for my project, since it is a web-based, open-source technology.

Basically, the WebRTC API allows developers to create apps (for webcam video/audio capture, streaming, and things like that) simply using the power of HTML5 and JavaScript. The main advantage of WebRTC is that users don't have to install any video-capture software or drivers, since it uses the web as the platform; everything is taken care of by the web browser. Currently, the latest versions of Chrome and Firefox support WebRTC.

Here's a quick demo of how WebRTC works: http://harshadura.github.io/record-rtc-together/



Well, the MediaStream API is really cool to play with, but it still lacks a feature that is essential for my project: it doesn't yet have the capability to return the recorded media as a single file containing both audio and video. What it currently does is generate the audio and the video as separate blobs (files). This opens the door to a real mess when the two files need to be merged later on.

So I was searching for a solution to this issue, and I found that there is a popular library called FFmpeg which can merge a video file and an audio file together. Then I tried to find a Java wrapper for it, and finally I found the right tool for the job, called Xuggle, which drives FFmpeg internally and exposes it as a Java API.

So I used this Xuggle API to successfully merge the two files I generate with WebRTC. The merging happens on the server side, after the video and audio have been uploaded. (There is still no obvious way to handle it on the client side.)
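For anyone curious, here is roughly what that merge step looks like with Xuggle's mediatool API. This is a simplified sketch rather than my exact module code: the file names, codec choice, dimensions, and audio parameters are placeholders, and I'm glossing over timestamp interleaving details.

```java
import com.xuggle.mediatool.IMediaReader;
import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.MediaToolAdapter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.mediatool.event.IAudioSamplesEvent;
import com.xuggle.mediatool.event.IVideoPictureEvent;
import com.xuggle.xuggler.ICodec;

public class NarrativeMediaMerger {

    /** Merges a video-only file and an audio-only file into a single output container. */
    public static void merge(String videoPath, String audioPath, String outputPath,
                             int width, int height, int channels, int sampleRate) {

        final IMediaWriter writer = ToolFactory.makeWriter(outputPath);
        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_H264, width, height); // codec is a placeholder choice
        writer.addAudioStream(1, 0, channels, sampleRate);

        // Feed every decoded video frame from the video-only blob into output stream 0.
        IMediaReader videoReader = ToolFactory.makeReader(videoPath);
        videoReader.addListener(new MediaToolAdapter() {
            @Override
            public void onVideoPicture(IVideoPictureEvent event) {
                writer.encodeVideo(0, event.getPicture());
            }
        });

        // Feed every decoded audio chunk from the audio-only blob into output stream 1.
        IMediaReader audioReader = ToolFactory.makeReader(audioPath);
        audioReader.addListener(new MediaToolAdapter() {
            @Override
            public void onAudioSamples(IAudioSamplesEvent event) {
                writer.encodeAudio(1, event.getAudioSamples());
            }
        });

        // readPacket() returns null until the end of the input is reached.
        while (videoReader.readPacket() == null) { /* decoding video */ }
        while (audioReader.readPacket() == null) { /* decoding audio */ }

        writer.close();
    }
}
```

In the module, this step runs once both uploads have arrived, and the merged file is what ends up stored with the patient narrative as a complex observation.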

So now everything works as intended! :-)




OpenMRS SoC - 8th week update


Hello Everyone,

I have completed the functionality that allows a logged-in care-provider user to review uploaded information, register a valid patient by creating a new patient record, and automatically transfer the relevant encounter to the newly created patient.
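In OpenMRS API terms, that last step boils down to saving the new patient and re-pointing the temporary encounter at it. Here is a stripped-down sketch of the idea; the surrounding module code, validation, and the way the temporary encounter is looked up are left out, and the method shape is my own illustration rather than the actual implementation.

```java
import org.openmrs.Encounter;
import org.openmrs.Patient;
import org.openmrs.api.context.Context;

public class PatientRegistrationHelper {

    /**
     * Registers the reviewed care seeker as a real patient and transfers the
     * narrative's temporary encounter to the newly created patient record.
     */
    public static Patient registerAndTransfer(Patient reviewedPatient, Encounter narrativeEncounter) {
        // Persist the new patient built from the reviewed narrative details.
        Patient created = Context.getPatientService().savePatient(reviewedPatient);

        // Re-point the temporary encounter at the newly registered patient and save it.
        narrativeEncounter.setPatient(created);
        Context.getEncounterService().saveEncounter(narrativeEncounter);

        return created;
    }
}
```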




Currently I am working on the WebRTC implementation to capture the video/audio media stream from the care seekers.

Thank you,
Harsha

OpenMRS SoC - 7th week update


Hi All,

I had my university mid-semester exams this week, so there hasn't been much progress. I am sure I will get back to you with a good update next week!

-Harsha