Dropbox Crawler
Revision as of 19:31, 3 January 2013

We are crawling Dropbox information to help our research.

It is very important for us to know Dropbox users' file patterns: for example, how big files are and which kinds of files users store on Dropbox.

To run our crawler, you may try to load it directly from our page by clicking here:


or you may download the JAR package and run it (double-click on most OSes, or run java -jar HelpOurResearch.jar)

Launch Java Application


We ensure that:

All data we collect are anonymized. We do not copy any file content, and we do not collect any personal information or file/directory names.


We will also make our data public in the near future, so anyone will be able to use this important data source.

What we do:

We will read your entire Dropbox folder; we will collect basic statistics (the log format can be viewed below); and we will send these statistics to our web server.


What we DO NOT do:

We do not copy any file content; we do not copy file or folder names; we do not copy any personal information; and we do not install or store anything on your computer.
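The guarantees above can be illustrated with a minimal sketch of a privacy-preserving folder walk. This is a hypothetical example, not the crawler's actual code: it recursively traverses a directory and keeps only aggregate numbers (file count and total size), never names or content.

```java
import java.io.File;

// Illustrative sketch (not the actual crawler source): walk a folder tree
// and retain only aggregate statistics. No file names or content are kept.
public class FolderStats {
    long fileCount = 0;
    long totalBytes = 0;

    void walk(File dir) {
        File[] entries = dir.listFiles();
        if (entries == null) return;   // unreadable directory or not a directory
        for (File f : entries) {
            if (f.isDirectory()) {
                walk(f);               // recurse into subfolders
            } else {
                fileCount++;           // only counts and sizes are recorded
                totalBytes += f.length();
            }
        }
    }

    public static void main(String[] args) {
        FolderStats stats = new FolderStats();
        stats.walk(new File(args.length > 0 ? args[0] : "."));
        System.out.println(stats.fileCount + " files, " + stats.totalBytes + " bytes");
    }
}
```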


Traces


As soon as possible, we will make our logs public.

These datasets were captured from Jan. 3, 2013 to (not yet defined).

Acceptable Use Policy (for future use of our logs)

The user must not attempt to reverse engineer the anonymization procedure used to protect the data.

If the user notices vulnerabilities in the anonymization procedure, they are kindly asked to inform the repository administrators.

When writing a paper using this data, please cite:

@inproceedings{

}


Format


All files are in a simple format. Each line contains file attributes, separated by #.

The following columns are found in these traces:

 #   Short description   Unit   Long description
 1   Length              -      File size in bytes
 2   Modified            -      Last modification time of the file (Unix date/time format)
 3   MIME                -      File MIME type, using Magic Java Unit
 4   EXTENSION           -      File extension (substring after the last "." in the string)
 5   MD5                 -      MD5 hash of the initial/final 8 bytes of the file
 6   MD5 of the name     -      MD5 hash of the file name string
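As a sketch, one trace line could be assembled from those six attributes as below. The class and method names are illustrative assumptions, not the crawler's actual code; note that the file name never appears in clear text, only its MD5 hash.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of building one '#'-separated trace line from the
// six attributes in the table above. Names/signatures are illustrative.
public class TraceLine {

    // Hex-encoded MD5 digest of an arbitrary byte array.
    static String md5Hex(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available
        }
    }

    // Extension = substring after the last "." (empty when there is no dot).
    static String extension(String name) {
        int dot = name.lastIndexOf('.');
        return dot < 0 ? "" : name.substring(dot + 1);
    }

    // Join the six attributes with '#', in the order given by the table:
    // length, mtime, MIME type, extension, content-sample MD5, name MD5.
    static String format(long length, long mtime, String mime,
                         String name, byte[] contentSample) {
        return length + "#" + mtime + "#" + mime + "#" + extension(name)
                + "#" + md5Hex(contentSample) + "#" + md5Hex(name.getBytes());
    }

    public static void main(String[] args) {
        // Only the hash of "notes.txt" reaches the trace, never the name.
        System.out.println(format(1024L, 1357240260L, "text/plain",
                                  "notes.txt", new byte[0]));
    }
}
```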

Crawler Source Code (Java)


Download the Java Source Code to Capture Files Information. The project may be used directly in NetBeans, version 7.2.1.


Previous Work


You may find Dropbox information in our previous work:

Drago, I., Mellia, M., Munafò, M. M., Sperotto, A., Sadre, R., and Pras, A. (2012). Inside Dropbox: Understanding Personal Cloud Storage Services. In Proceedings of the 12th ACM Internet Measurement Conference (IMC'12), Boston, Nov. 2012.

As described in the paper, the data was captured at 4 vantage points in 2 European countries. The first 4 files were collected from March 24, 2012 to May 5, 2012. A second dataset was collected in Campus 1 in June and July 2012 to complement the analysis.

The data was captured using Tstat, an open-source monitoring tool developed at Politecnico di Torino. Tstat exports flow data containing more than 100 metrics. The source code of Tstat can be obtained from here. More information about the DN-Hunter version of Tstat, needed for some experiments, can be found here. Note that all IP addresses in the datasets are anonymized.


External Link


Conference Website