About

The eLog library was initially developed as a research prototype and published for lifelogging researchers in 2010 to help them analyze heterogeneous data and build complex visualizations. It has kept growing with the progress of mobile computing, and its UI component has recently been released under the GPL v3 license for wider use. The eLog UI library is optimized for mobile environments and integrates easily with existing Web services.

Who We Are

The original work was proposed by Pil Ho and later extended in collaboration with 28 researchers around the world, who contributed their lifelogs, collaborated on lifelog analysis, and shared research results to build an open lifelogging platform for the public. Pil Ho has kept the library up to date, following the progress of mobile computing.

Updates

  • Nov. 2014: Changed the web page skin to Bootstrap.
  • Nov. 2014: Published the eLog UI library under GPL v3.
  • Oct. 2014: Updated the eLog library and its documentation.


Summarization of lifelog data

Problem description
Lifelog data is a huge amount of heterogeneous information that describes the life of a person. The main problem in using these data is how to extract useful information and present it to the user in a fast, easy-to-use manner.
The summarization of lifelog data can be useful in many contexts:

  • It can help people with memory impairments recover lost memories;
  • It can retrieve digital information acquired over the years;
  • It can retrieve personal information about physical activity or, in general, past experiences.

Problem and solution specification
We want to address the problem of summarising frequent events that happen during a user-specified period of time. The key idea that drives our solution is to proceed with the summarisation in steps, as follows:

  • GPS points: first of all, we extract all the paths that the subject has travelled during the given period of time. From this data we can recognize the most frequent paths.
  • People: from the paths it is possible to extract the people that the subject has encountered most often, in order to identify the frequently seen people.
  • Images: with this information we can select the most interesting pictures describing the places and the people seen in this period of time. The photos are chosen from those captured at the starting and arrival places, using a set of criteria to decide which ones are better.
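The stepwise idea above amounts to simple frequency counting over path records. The following is a minimal sketch; the record layout (a path carrying its start and end places plus the people seen) is our assumption for illustration, not part of the eLog data model:

```python
from collections import Counter

# Hypothetical path records: (start_place, end_place, people_seen).
# The field layout is an assumption for illustration only.
paths = [
    ("home", "office", {"alice", "bob"}),
    ("home", "office", {"alice"}),
    ("home", "gym",    {"carol"}),
    ("home", "office", {"alice", "bob"}),
]

# Steps 1-2: the most frequent path is the most common (start, end) pair.
route_counts = Counter((start, end) for start, end, _ in paths)
top_route, top_count = route_counts.most_common(1)[0]

# Step 3: among trips on that route, count who was seen most often.
people_counts = Counter(
    person
    for start, end, people in paths
    if (start, end) == top_route
    for person in people
)

print(top_route, top_count)          # ('home', 'office') 3
print(people_counts.most_common(1))  # [('alice', 3)]
```

The frequently seen people then feed the image-selection step, which only has to consider photos from the top routes.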

Plan of work
Our work can be divided into five ordered phases, as follows:

  1. Path identification
    We extract paths by grouping individual GPS points. The idea is to start with the first point and add one point at a time (ordered by timestamp) until the subject reaches the arrival area.
    • Criterion used: we assume that a path starts or ends where the user stands still (or stays within a small area) for at least 10–20 min; this is detected by analysing the distances between the points of the last 20 min.
  2. Area clustering and retrieval of paths of interest
    At this step we cluster similar areas or paths together and then retrieve the most frequent tracks. To do this we run a frequent-itemset search algorithm on the locations of the starting and ending points. From this set we select the two most frequent paths.
  3. People of interest
    We can link each image to the people that appear in it (some people may remain anonymous) and find the people that the subject has seen most often during the path.
  4. Images to remember
    The people seen by the subject help us narrow down the set of interesting photos to publish in the summary. We believe that it is easier to remember events through the people that took part in them. Another criterion is to use SenseCam data such as light levels and accelerometer readings. Because the summary spans different days, we use an algorithm that splits the stream into events, so as to avoid selecting too many images from the same event.
  5. Present the summarised data
    Provide the summarisation to the user in an accessible fashion. The user will be able to request the desired number of results.
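The stand-still criterion of phase 1 can be prototyped in a few lines. This is a minimal sketch under our own assumptions: points live in a local planar frame (real GPS traces would need a haversine distance on latitude/longitude), and the 50 m radius for "a small area" is an invented threshold:

```python
from math import hypot

# Each point is (timestamp_seconds, x, y) in a local planar frame.
STILL_WINDOW = 20 * 60   # the 20-minute window from the plan, in seconds
STILL_RADIUS = 50.0      # "small area" threshold in metres (our assumption)

def is_still(points, radius):
    """True if every point lies within `radius` of the first one."""
    _, x0, y0 = points[0]
    return all(hypot(x - x0, y - y0) <= radius for _, x, y in points)

def split_paths(points, window=STILL_WINDOW, radius=STILL_RADIUS):
    """Cut a time-ordered point stream into paths; a path ends when
    the subject has stayed inside a small area for `window` seconds."""
    paths, current, was_still = [], [], True
    for i, p in enumerate(points):
        recent = [q for q in points[: i + 1] if q[0] >= p[0] - window]
        still = len(recent) > 1 and is_still(recent, radius)
        if still and not was_still and current:
            current.append(p)           # movement just stopped:
            paths.append(current)       # close the current path here
            current = []
        elif not still:
            current.append(p)           # still moving: extend the path
        was_still = still
    if len(current) > 1:
        paths.append(current)           # trailing path with no final stop
    return paths

# Synthetic trace: walk away, stand still for ~20 min, walk again.
trace = [(0, 0, 0), (600, 200, 0), (1200, 400, 0), (1800, 600, 0),
         (2400, 600, 0), (3000, 600, 5), (3600, 600, 0),
         (4200, 800, 0), (4800, 1000, 0)]
paths = split_paths(trace)
print([len(p) for p in paths])  # [6, 2]
```

The trace is split into two paths, one ending at the still period around (600, 0) and one resuming when movement restarts.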
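The event splitting of phase 4 can likewise be sketched as a simple time-gap partition over photo timestamps. A real splitter would also use the SenseCam light and accelerometer streams mentioned above; the 30-minute gap threshold here is purely an assumption:

```python
# Partition sorted photo timestamps (in seconds) into events at large
# time gaps, then keep the first photo of each event so that no single
# event dominates the summary. The gap threshold is an assumption.
EVENT_GAP = 30 * 60

def split_events(timestamps, gap=EVENT_GAP):
    """Group sorted timestamps into events separated by > `gap` seconds."""
    if not timestamps:
        return []
    events = [[timestamps[0]]]
    for prev, t in zip(timestamps, timestamps[1:]):
        if t - prev > gap:
            events.append([t])      # big gap: a new event starts
        else:
            events[-1].append(t)    # same event continues
    return events

photos = [0, 60, 120, 7200, 7260, 90000]   # capture times in seconds
events = split_events(photos)
picks = [e[0] for e in events]             # one photo per event
print(picks)  # [0, 7200, 90000]
```

Here three photos taken within two minutes collapse into one event, so the summary takes at most one image from that burst.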

How to evaluate our idea
The best way to evaluate whether our system works is to generate summarisations for different periods of time and ask Pil Ho Kim and other participants in the life-log project to try to remember what they did. Another option is to compare some experimental results with the summarisation systems built by our colleagues, for example by measuring the quality of the selected images.
We will add further measures as the project improves and as we learn about the summarisation algorithms of the other groups.
