Web archiving

From Wikipedia, the free encyclopedia

Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Web archivists typically employ web crawlers for automated capture because of the massive size and amount of information on the Web. The largest web archiving organization based on a bulk crawling approach is the Wayback Machine, which strives to maintain an archive of the entire Web.

The growing portion of human culture created and recorded on the web makes it inevitable that more and more libraries and archives will have to face the challenges of web archiving.[1] National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content.

Commercial web archiving software and services are also available to organizations that need to archive their own web content for corporate heritage, regulatory, or legal purposes.

History and development

While curation and organization of the web have been prevalent since the mid- to late 1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996.[2] The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001.[2] As of 2018, the Internet Archive was home to 40 petabytes of data.[3] The Internet Archive also developed many of its own tools for collecting and storing its data, including PetaBox for storing large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries.[2] Other projects launched around the same time included a web archiving project by the National Library of Canada, Australia's Pandora, Tasmanian web archives and Sweden's Kulturarw3.[4][5]

From 2001 to 2010, the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas.[6][7] The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open-source tools for the creation of web archives.[8]

The now-defunct Internet Memory Foundation was founded in 2004 by the European Commission in order to archive the web in Europe.[2] This project developed and released many open-source tools for "rich media capturing, temporal coherence analysis, spam assessment, and terminology evolution detection."[2] The data from the foundation is now housed by the Internet Archive but is not currently publicly accessible.[9]

Although there is no centralized responsibility for its preservation, web content is rapidly becoming the official record. For example, in 2017, the United States Department of Justice affirmed that the government treats the President's tweets as official statements.[10]

Methods of collection

Web archivists generally archive various types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length. This metadata is useful in establishing the authenticity and provenance of the archived collection.
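The kind of metadata record described above can be sketched in Python. This is an illustrative sketch, not the code of any particular archiving tool; the function and field names are made up for the example:

```python
import hashlib
from datetime import datetime, timezone

def make_metadata_record(url, body, headers):
    """Build a metadata record for a captured web resource.

    `url`, `body` (bytes), and `headers` (a dict) would come from
    whatever HTTP client performed the capture.
    """
    return {
        "url": url,
        "access_time": datetime.now(timezone.utc).isoformat(),
        "mime_type": headers.get("Content-Type", "application/octet-stream"),
        "content_length": len(body),
        # A digest supports later authenticity checks on the stored bitstream.
        "sha256": hashlib.sha256(body).hexdigest(),
    }

record = make_metadata_record(
    "https://example.org/",
    b"<html><body>Hello</body></html>",
    {"Content-Type": "text/html"},
)
```

In practice such metadata is stored alongside the captured bitstream, for example in the record headers of a WARC file.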

Remote harvesting

The most common web archiving technique uses web crawlers to automate the process of collecting web pages.[5] Web crawlers typically access web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remotely harvesting web content. Examples of web crawlers used for web archiving include Heritrix, discussed above.

Various free services may be used to archive web resources "on-demand" using web crawling techniques, including the Wayback Machine and WebCite.
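The crawling approach described above can be sketched as a small breadth-first crawler. This is an illustrative sketch under simplifying assumptions (naive regex link extraction, no politeness delays); the fetch function is injected so the example runs against an in-memory stand-in for the Web rather than making network requests:

```python
import re
from collections import deque

LINK_RE = re.compile(r'href="([^"]+)"')

def crawl(seed, fetch, max_pages=100):
    """Breadth-first crawl starting from `seed`.

    `fetch` maps a URL to HTML text (or None on failure).
    `max_pages` bounds the crawl, a common defence against
    crawler traps that generate endless dynamic pages.
    """
    archive = {}
    queue = deque([seed])
    while queue and len(archive) < max_pages:
        url = queue.popleft()
        if url in archive:
            continue
        html = fetch(url)
        if html is None:
            continue
        archive[url] = html
        # Follow links the same way a browsing user would.
        for link in LINK_RE.findall(html):
            if link not in archive:
                queue.append(link)
    return archive

# Tiny in-memory "web" standing in for real HTTP fetches.
site = {
    "/": '<a href="/a">a</a> <a href="/b">b</a>',
    "/a": '<a href="/">home</a>',
    "/b": "leaf page",
}
pages = crawl("/", site.get)
```

A production archiving crawler such as Heritrix additionally handles URL canonicalization, robots.txt, politeness, and writing captures to WARC files.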

Database archiving

Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can be made available using a single access system. This approach is exemplified by the DeepArc[11] and Xinq[12] tools developed by the Bibliothèque Nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
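The mapping of relational content to XML can be illustrated with a minimal sketch. This is not DeepArc's actual schema or code; the table, element names, and data are made up for the example:

```python
import sqlite3
import xml.etree.ElementTree as ET

def export_table_to_xml(conn, table):
    """Serialise one table of a relational database as an XML document,
    in the spirit of database-archiving tools (illustrative schema only)."""
    root = ET.Element("table", name=table)
    cur = conn.execute(f"SELECT * FROM {table}")
    columns = [d[0] for d in cur.description]
    for values in cur:
        row = ET.SubElement(root, "row")
        # Each column becomes a named field element, preserving structure.
        for col, val in zip(columns, values):
            ET.SubElement(row, "field", name=col).text = str(val)
    return ET.tostring(root, encoding="unicode")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pages (id INTEGER, title TEXT)")
conn.execute("INSERT INTO pages VALUES (1, 'Home'), (2, 'About')")
xml_doc = export_table_to_xml(conn, "pages")
```

Once multiple databases are exported into such a common format, one access layer (the role Xinq plays) can query them all.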

Transactional archiving

Transactional archiving is an event-driven approach that collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.[13]

A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams.
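The intercept-filter-store pipeline can be sketched as follows. The class and its fields are hypothetical, standing in for whatever a real transactional archiver uses; only the duplicate-elimination step is shown, not the HTTP interception itself:

```python
import hashlib

class TransactionalArchive:
    """Sketch of a transactional archiver: stores each response
    bitstream once, keyed by content digest, and logs every
    transaction so that "what was served when" remains provable."""

    def __init__(self):
        self.bitstreams = {}    # digest -> response bytes, stored once
        self.transactions = []  # (url, digest) in capture order

    def record(self, url, response_body):
        digest = hashlib.sha256(response_body).hexdigest()
        # Duplicate elimination: identical bodies are stored only once.
        if digest not in self.bitstreams:
            self.bitstreams[digest] = response_body
        self.transactions.append((url, digest))
        return digest

archive = TransactionalArchive()
archive.record("https://example.org/", b"<html>v1</html>")
archive.record("https://example.org/", b"<html>v1</html>")  # unchanged page
archive.record("https://example.org/", b"<html>v2</html>")  # page changed
```

Three transactions are logged, but only two distinct bitstreams are stored, which is what makes this approach affordable for high-traffic sites.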

Difficulties and limitations

Crawlers

Web archives which rely on web crawling as their primary means of collecting the Web are affected by the difficulties of web crawling:

  • The robots exclusion protocol may request that crawlers not access portions of a website. Some web archivists may ignore the request and crawl those portions anyway.
  • Large portions of a website may be hidden in the Deep Web. For example, a results page behind a web form lies in the Deep Web if crawlers cannot follow a link to it.
  • Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl.
  • Most archiving tools do not capture the page exactly as it is; ad banners and images are often missed during archiving.
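The robots exclusion protocol mentioned in the first point can be evaluated with Python's standard library; whether to honour the result is the archivist's policy decision. The robots.txt content, user agent, and URLs below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt closing part of the site to all crawlers.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

allowed = rp.can_fetch("ArchiveBot", "https://example.org/public/page.html")
blocked = rp.can_fetch("ArchiveBot", "https://example.org/private/page.html")
```

Here `allowed` is True and `blocked` is False, so a compliant crawler would skip everything under /private/.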

However, a native-format web archive, i.e., a fully browsable web archive with working links, media, and so on, is only really possible using crawler technology.

The Web is so large that crawling a significant portion of it requires substantial technical resources, and it changes so quickly that portions of a website may be modified before a crawler has even finished crawling it.

General limitations

Some web servers are configured to return different pages to web archiver requests than they would in response to regular browser requests. This may be done to fool search engines into directing more user traffic to a website, to avoid accountability, or to provide enhanced content only to those browsers that can display it.

Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman[14] states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web". However, national libraries in some countries[15] have a legal right to copy portions of the web under an extension of a legal deposit.

Some private non-profit web archives that are made publicly accessible, like WebCite, the Internet Archive or the Internet Memory Foundation, allow content owners to hide or remove archived content that they do not want the public to access. Other web archives are only accessible from certain locations or have regulated usage. WebCite cites a lawsuit against Google's caching, which Google won.[16]

Laws

In 2017 the Financial Industry Regulatory Authority, Inc. (FINRA), a United States financial regulatory organization, released a notice stating that all businesses doing digital communications are required to keep a record. This includes website data, social media posts, and messages.[17] Some copyright laws may inhibit Web archiving. For instance, academic archiving by Sci-Hub falls outside the bounds of contemporary copyright law. The site provides enduring access to academic works, including those that do not have an open access license, and thereby contributes to the archival of scientific research which may otherwise be lost.[18][19]

See also

References

Citations

  1. ^ Truman, Gail (2016). "Web Archiving Environmental Scan". Harvard Library.
  2. ^ Toyoda, M.; Kitsuregawa, M. (May 2012). "The History of Web Archiving". Proceedings of the IEEE. 100 (Special Centennial Issue): 1441–1443. doi:10.1109/JPROC.2012.2189920. ISSN 0018-9219.
  3. ^ "Inside Wayback Machine, the internet's time capsule". The Hustle. September 28, 2018. sec. Wayyyy back. Retrieved July 21, 2020.
  4. ^ Costa, Miguel; Gomes, Daniel; Silva, Mário J. (September 2017). "The evolution of web archiving". International Journal on Digital Libraries. 18 (3): 191–205. doi:10.1007/s00799-016-0171-9. S2CID 24303455.
  5. ^ Consalvo, Mia; Ess, Charles, eds. (April 2011). "Web Archiving – Between Past, Present, and Future". The Handbook of Internet Studies (1st ed.). Wiley. pp. 24–42. doi:10.1002/9781444314861. ISBN 978-1-4051-8588-2.
  6. ^ "IWAW 2010: The 10th Intl Web Archiving Workshop". www.wikicfp.com. Retrieved August 19, 2019.
  7. ^ "IWAW - International Web Archiving Workshops". bibnum.bnf.fr. Archived from the original on November 20, 2012. Retrieved August 19, 2019.
  8. ^ "About the IIPC". IIPC. Retrieved April 17, 2022.
  9. ^ "Internet Memory Foundation: Free Web: Free Download, Borrow and Streaming". archive.org. Internet Archive. Retrieved July 21, 2020.
  10. ^ Regis, Camille (June 4, 2019). "Web Archiving: Think the Web is Permanent? Think Again". History Associates. Retrieved July 14, 2019.
  11. ^ "DeepArc". deeparc.sourceforge.net. Archived from the original on March 3, 2024.
  12. ^ "Xinq [Xml INQuiry]". National Library of Australia. Archived from the original on February 27, 2011.
  13. ^ Brown, Adrian (January 10, 2016). Archiving websites: a practical guide for information management professionals. Facet. ISBN 978-1-78330-053-2. OCLC 1064574312.
  14. ^ Lyman (2002)
  15. ^ "Legal Deposit | IIPC". netpreserve.org. Archived from the original on March 16, 2017. Retrieved January 31, 2017.
  16. ^ "WebCite FAQ". Webcitation.org. Retrieved September 20, 2018.
  17. ^ "Social Media and Digital Communications" (PDF). finra.org. FINRA.
  18. ^ Claburn, Thomas (September 10, 2020). "Open access journals are vanishing from the web, Internet Archive stands ready to fill in the gaps". The Register.
  19. ^ Laakso, Mikael; Matthias, Lisa; Jahn, Najko (2021). "Open is not forever: A study of vanished open access journals". Journal of the Association for Information Science and Technology. 72 (9): 1099–1112. arXiv:2008.11933. doi:10.1002/ASI.24460. S2CID 221340749.

General bibliography

External links