User:Bookuporshutup/sandbox

= Web Archiving =
Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Web archivists typically employ web crawlers for automated capture due to the massive amount of information on the Web. The largest web archiving organization based on a bulk crawling approach is the Internet Archive, which strives to maintain an archive of the entire Web.

The growth of the Web, and the growing portion of human culture created and recorded on it, make it inevitable that more and more libraries and archives will have to face the challenges of web archiving. National libraries, national archives, and various consortia of organizations are also involved in archiving culturally important Web content.

Commercial web archiving software and services are also available to organizations that need to archive their own web content for corporate heritage, regulatory, or legal purposes.

History and development
While curation and organization of the web has been prevalent since the mid- to late 1990s, one of the first large-scale web archiving projects was the Internet Archive, a non-profit organization created by Brewster Kahle in 1996. The Internet Archive released its own search engine for viewing archived web content, the Wayback Machine, in 2001. As of 2018, the Internet Archive was home to 40 petabytes of data. The Internet Archive also developed many of its own tools for collecting and storing its data, including Petabox for storing large amounts of data efficiently and safely, and Heritrix, a web crawler developed in conjunction with the Nordic national libraries. Other projects launched around the same time included Australia's Pandora and Tasmanian web archives and Sweden's Kulturarw3.

From 2001 to 2010, the International Web Archiving Workshop (IWAW) provided a platform to share experiences and exchange ideas. The International Internet Preservation Consortium (IIPC), established in 2003, has facilitated international collaboration in developing standards and open source tools for the creation of web archives.

The now-defunct Internet Memory Foundation was founded in 2004 by the European Commission in order to archive the web in Europe. The project developed and released many open source tools, such as tools for "rich media capturing, temporal coherence analysis, spam assessment, and terminology evolution detection." The data from the foundation is now housed by the Internet Archive, but is not currently publicly accessible.

Although there is no centralized responsibility for its preservation, web content is rapidly becoming the official record. For example, in 2017, the United States Department of Justice affirmed that the government treats the President's tweets as official statements.

Archiving the web
Web archivists generally archive various types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length. This metadata is useful in establishing authenticity and provenance of the archived collection.
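As an illustrative sketch of the metadata capture described above (the function and field names are hypothetical, not taken from any particular archiving tool), assuming the HTTP response headers are available as a dict:

```python
import datetime
import hashlib

def capture_metadata(url, headers, body):
    """Record provenance metadata for one archived resource.

    The fields chosen (access time, MIME type, content length) mirror
    those typically recorded by web archivists; the content digest
    helps establish the authenticity of the stored copy later.
    """
    return {
        "url": url,
        "access_time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "mime_type": headers.get("Content-Type", "application/octet-stream"),
        "content_length": int(headers.get("Content-Length", len(body))),
        "sha256": hashlib.sha256(body).hexdigest(),
    }

# Example: metadata for a captured HTML page (headers are illustrative).
record = capture_metadata(
    "http://example.org/",
    {"Content-Type": "text/html", "Content-Length": "1256"},
    b"<html>...</html>",
)
```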

In addition to large-scale projects that aim to archive certain slices of the web, there are services that offer on-demand web archiving of specific URLs, such as Perma.cc, WebCite, and Archive.is. These services help create persistent links that are especially useful to scholars citing web content in their work.

Information Management Life Cycle

Creation
[add]

Acquisition
The most common web archiving technique uses web crawlers to automate the process of collecting web pages in a process called remote harvesting. Web crawlers typically access web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remote harvesting web content. Examples of web crawlers used for web archiving include Heritrix, HTTrack, and Wget.
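As a minimal sketch of remote harvesting (this is not how Heritrix, HTTrack, or Wget are actually implemented), the following uses only Python's standard library. The fetch function is pluggable, so the in-memory `site` dict here stands in for live web pages; a real harvester would fetch over HTTP.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collect href targets from anchor tags in one HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed, fetch, limit=100):
    """Breadth-first remote harvesting: fetch a page, archive it,
    then follow its links until the frontier is empty or `limit`
    pages have been stored."""
    archive, queue, seen = {}, deque([seed]), {seed}
    while queue and len(archive) < limit:
        url = queue.popleft()
        html = fetch(url)
        if html is None:
            continue
        archive[url] = html  # store the captured page
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)  # resolve relative links
            if absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return archive

# Tiny in-memory "site" standing in for live web pages.
site = {
    "http://example.org/": '<a href="/a">A</a><a href="/b">B</a>',
    "http://example.org/a": '<a href="/">home</a>',
    "http://example.org/b": "no links here",
}
archive = crawl("http://example.org/", site.get)
```

The breadth-first frontier with a `seen` set is the core idea; production crawlers add politeness delays, robots.txt checks, and per-host limits on top of it.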

Another method, database archiving, archives the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the DeepArc and Xinq tools developed by the Bibliothèque Nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
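The database-to-XML mapping can be illustrated with a small sketch using Python's standard library; the element names and example table below are made up for illustration and do not reflect DeepArc's actual schema.

```python
import sqlite3
import xml.etree.ElementTree as ET

def export_table_to_xml(conn, table):
    """Dump one relational table into a generic XML document, in the
    spirit of a database-archiving extraction step: each row becomes
    a <record>, each column a named <field>."""
    cursor = conn.execute(f"SELECT * FROM {table}")
    columns = [d[0] for d in cursor.description]
    root = ET.Element("table", name=table)
    for row in cursor:
        record = ET.SubElement(root, "record")
        for col, value in zip(columns, row):
            field = ET.SubElement(record, "field", name=col)
            field.text = "" if value is None else str(value)
    return ET.tostring(root, encoding="unicode")

# Illustrative database-driven content to be archived.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
conn.execute("INSERT INTO articles VALUES (1, 'Web archiving')")
xml_doc = export_table_to_xml(conn, "articles")
```

Once content from many databases is in one such standard format, a single access system (the role Xinq plays) can query all of them uniformly.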

Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information. A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams.
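The dedupe-and-store step described above can be sketched as follows; the class and field names are hypothetical, not taken from any real transactional-archiving product.

```python
import datetime
import hashlib

class TransactionalArchive:
    """Event-driven capture sketch: every intercepted response body is
    hashed; identical bodies are stored only once, while every
    transaction (URL, digest, timestamp) is logged so one can later
    show what content was served at a given moment."""
    def __init__(self):
        self.blobs = {}  # sha256 digest -> stored response bitstream
        self.log = []    # one entry per observed transaction

    def record(self, url, body, when=None):
        digest = hashlib.sha256(body).hexdigest()
        self.blobs.setdefault(digest, body)  # duplicate bodies stored once
        ts = when or datetime.datetime.now(datetime.timezone.utc)
        self.log.append({"url": url, "sha256": digest, "time": ts.isoformat()})
        return digest

archive = TransactionalArchive()
archive.record("http://example.org/", b"<html>v1</html>")
archive.record("http://example.org/", b"<html>v1</html>")  # same content, new event
archive.record("http://example.org/", b"<html>v2</html>")  # page changed
```

Keeping the log separate from the blob store is what makes the approach evidential: the log proves *when* each response was seen, while the hash ties it to exactly one stored bitstream.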

Cataloging/Identification
[add]

Storage
[add]

Preservation and Access
[add]

Crawlers
Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling.

The robots exclusion protocol may request that crawlers not access portions of a website; some web archivists may ignore the request and crawl those portions anyway. Large portions of a web site may also be hidden in the Deep Web. For example, the results page behind a web form can lie in the Deep Web if crawlers cannot follow a link to it. Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl. Finally, many archiving tools do not capture a page exactly as it is; ad banners and images are often missed during archiving.
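As an aside on the robots exclusion protocol mentioned above, Python's standard library includes a parser for robots.txt rules that a polite crawler can consult before each fetch (the rules and bot name below are illustrative):

```python
from urllib import robotparser

# robots.txt content as a site might serve it (illustrative).
rules = """\
User-agent: *
Disallow: /private/
"""

parser = robotparser.RobotFileParser()
parser.parse(rules.splitlines())

# A polite crawler checks each URL before fetching it.
allowed = parser.can_fetch("MyArchiveBot", "http://example.org/private/page.html")
public = parser.can_fetch("MyArchiveBot", "http://example.org/index.html")
```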

However, a native format web archive, i.e., a fully browsable web archive with working links, media, etc., is only really possible using crawler technology.

The Web is so large that crawling a significant portion of it requires substantial technical resources, and the Web changes so fast that portions of a website may change before a crawler has even finished crawling it.

General limitations
Some web servers are configured to return different pages to web archiver requests than they would to regular browser requests. This is typically done to fool search engines into directing more user traffic to a website, to avoid accountability, or to provide enhanced content only to those browsers that can display it.

Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web". However, national libraries in some countries have a legal right to copy portions of the web under an extension of legal deposit.

Some private non-profit web archives that are made publicly accessible, such as WebCite, the Internet Archive, or the Internet Memory Foundation, allow content owners to hide or remove archived content that they do not want the public to have access to. Other web archives are only accessible from certain locations or have regulated usage. WebCite cites a lawsuit against Google's caching, which Google won.

Laws
In 2017, the Financial Industry Regulatory Authority (FINRA) released a notice stating that all firms doing business in digital communications are required to keep a record. This includes website data, social media posts, and messages.

[either flesh out or relocate - possibly to a "why" or purpose section]

= Native American Fashion =
Historical clothing of Native American peoples has been collected and displayed by curators of major museums, with a focus on pre-20th-century attire. For the most part, these collections failed to take into consideration the shift in clothing trends among indigenous peoples brought about by assimilation policies or by access to tailoring training and industrially produced textiles. However, indigenous-focused museums have featured exhibitions of contemporary Native fashion. For example, the 2017 "Native Fashion Now" exhibit at the National Museum of the American Indian in New York City featured Project Runway finalist Patricia Michaels, and the Museum of Indian Arts and Culture in Santa Fe held exhibits as early as 2007 on Native couture and on Institute of American Indian Arts founder Lloyd Kiva New.

While Native peoples have always produced clothing, traditionally the garments they made were for personal or ceremonial use.[3] However, forced assimilation policies throughout the nineteenth and early twentieth centuries focused on eradicating Native American culture, including religious observance, language, and other traditional practices. Later, policies such as the 1934 Indian Reorganization Act changed the strategy for education of Native peoples, encouraging them instead to reconnect with their cultures, including the creation of traditional dress.

In 1942, the American anthropologist Frederic H. Douglas sought to highlight the beauty of Native American fashion by presenting a fashion show featuring garments made by Native Americans between 1830 and 1950.[4] During the same decade, Lloyd Kiva New, a Cherokee who had graduated from the Art Institute of Chicago, began touring throughout Europe and the United States with clothing and accessory lines he had designed, using hand-woven and dyed fabrics and leather crafts. In 1945, New opened a studio in Scottsdale, Arizona, with financial backing from Douglas,[5] which initially focused on belts, hats, and purses. Influenced by Navajo medicine bags, his purses, decorated with hand-worked metals, became a specialty.[6] Recognizing the need to reduce labor costs, he began combining machine work with handcrafting and instituted an apprenticeship program to meet increasing production demands while gearing his designs toward the upscale market.[7]

= USOC - ENGL 1410 - Martineau =
In 2018, the United States Olympic Committee came under fire for its complicity in the sexual assault and abuse of women and girls at the hands of former USA Gymnastics national team doctor Larry Nassar. Olympian Aly Raisman released a public statement accusing the committee of failing "to acknowledge its role in this mess." In the wake of Nassar's convictions, more than 150 lawsuits are pending against people and institutions related to the case, including the USOC.

= Immigration - ENGL 1410 - Baldoni =
While undocumented students in Colorado are eligible for in-state tuition through the ASSET program, Senator Cory Gardner has been vocal about his opposition to this policy, instead supporting the passage of "meaningful immigration reform."