Sunday, December 31, 2023

Social network

From Wikipedia, the free encyclopedia
(Redirected from Social networking)
This article is about the theoretical concept as used in the social and behavioral sciences. For social networking sites, see Social networking service. For the 2010 movie, see The Social Network. For other uses, see Social network (disambiguation).
Evolution graph of a social network: Barabási model.

A social network is a social structure made up of a set of social actors (such as individuals or organizations), sets of dyadic ties, and other social interactions between actors. The social network perspective provides a set of methods for analyzing the structure of whole social entities as well as a variety of theories explaining the patterns observed in these structures.[1] The study of these structures uses social network analysis to identify local and global patterns, locate influential entities, and examine network dynamics.

Social networks and their analysis constitute an inherently interdisciplinary academic field that emerged from social psychology, sociology, statistics, and graph theory. Georg Simmel authored early structural theories in sociology emphasizing the dynamics of triads and the "web of group affiliations".[2] Jacob Moreno is credited with developing the first sociograms in the 1930s to study interpersonal relationships. These approaches were mathematically formalized in the 1950s, and theories and methods of social networks became pervasive in the social and behavioral sciences by the 1980s.[1][3] Social network analysis is now one of the major paradigms in contemporary sociology, and is also employed in a number of other social and formal sciences. Together with other complex networks, it forms part of the nascent field of network science.[4][5]

Overview

The social network is a theoretical construct useful in the social sciences to study relationships between individuals, groups, organizations, or even entire societies (social units, see differentiation). The term is used to describe a social structure determined by such interactions. The ties through which any given social unit connects represent the convergence of the various social contacts of that unit. This theoretical approach is, necessarily, relational. An axiom of the social network approach to understanding social interaction is that social phenomena should be primarily conceived and investigated through the properties of relations between and within units, instead of the properties of these units themselves. Thus, one common criticism of social network theory is that individual agency is often ignored,[6] although this may not be the case in practice (see agent-based modeling). Precisely because many different types of relations, singular or in combination, form these network configurations, network analytics are useful to a broad range of research enterprises. In social science, these fields of study include, but are not limited to, anthropology, biology, communication studies, economics, geography, information science, organizational studies, social psychology, sociology, and sociolinguistics.
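
The relational units the paragraph above describes map directly onto a graph: actors become nodes and dyadic ties become edges. A minimal sketch, with purely hypothetical actors and ties, showing how that representation supports locating influential entities via degree centrality:

```python
from collections import defaultdict

# Hypothetical actors and dyadic ties (names are illustrative only).
ties = [("ana", "ben"), ("ana", "cho"), ("ben", "cho"), ("cho", "dia")]

# Build an undirected adjacency structure: each tie links two actors.
adjacency = defaultdict(set)
for a, b in ties:
    adjacency[a].add(b)
    adjacency[b].add(a)

# Degree centrality: the number of ties incident to each actor,
# one of the simplest ways to locate "influential entities".
degree = {actor: len(neighbors) for actor, neighbors in adjacency.items()}
```

Here "cho" has the highest degree (3 ties), making it the most central actor under this simple measure; richer measures (betweenness, eigenvector centrality) refine the same idea.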

History

In the late 1890s, both Émile Durkheim and Ferdinand Tönnies foreshadowed the idea of social networks in their theories and research of social groups. Tönnies argued that social groups can exist as personal and direct social ties that either link individuals who share values and belief (Gemeinschaft, German, commonly translated as "community") or impersonal, formal, and instrumental social links (Gesellschaft, German, commonly translated as "society").[7] Durkheim gave a non-individualistic explanation of social facts, arguing that social phenomena arise when interacting individuals constitute a reality that can no longer be accounted for in terms of the properties of individual actors.[8] Georg Simmel, writing at the turn of the twentieth century, pointed to the nature of networks and the effect of network size on interaction and examined the likelihood of interaction in loosely knit networks rather than groups.[9]

Moreno's sociogram of a 2nd grade class

Major developments in the field can be seen in the 1930s by several groups in psychology, anthropology, and mathematics working independently.[6][10][11] In psychology, in the 1930s, Jacob L. Moreno began systematic recording and analysis of social interaction in small groups, especially classrooms and work groups (see sociometry). In anthropology, the foundation for social network theory is the theoretical and ethnographic work of Bronislaw Malinowski,[12] Alfred Radcliffe-Brown,[13][14] and Claude Lévi-Strauss.[15] A group of social anthropologists associated with Max Gluckman and the Manchester School, including John A. Barnes,[16] J. Clyde Mitchell and Elizabeth Bott Spillius,[17][18] often are credited with performing some of the first fieldwork from which network analyses were performed, investigating community networks in southern Africa, India and the United Kingdom.[6] Concomitantly, British anthropologist S. F. Nadel codified a theory of social structure that was influential in later network analysis.[19] In sociology, the early (1930s) work of Talcott Parsons set the stage for taking a relational approach to understanding social structure.[20][21] Later, drawing upon Parsons' theory, the work of sociologist Peter Blau provides a strong impetus for analyzing the relational ties of social units with his work on social exchange theory.[22][23][24]

By the 1970s, a growing number of scholars worked to combine the different tracks and traditions. One group consisted of sociologist Harrison White and his students at the Harvard University Department of Social Relations. Also independently active in the Harvard Social Relations department at the time were Charles Tilly, who focused on networks in political and community sociology and social movements, and Stanley Milgram, who developed the "six degrees of separation" thesis.[25] Mark Granovetter[26] and Barry Wellman[27] are among the former students of White who elaborated and championed the analysis of social networks.[26][28][29][30]
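
Milgram's "six degrees of separation" thesis is, in graph terms, a claim about shortest path lengths. A toy sketch (hypothetical names, not Milgram's data) computing the degrees of separation between two people with breadth-first search:

```python
from collections import deque

# Toy undirected friendship network (hypothetical names).
friends = {
    "ana": {"ben"}, "ben": {"ana", "cho"},
    "cho": {"ben", "dia"}, "dia": {"cho"},
}

def degrees_of_separation(start, goal):
    """Breadth-first search: length of the shortest chain of ties."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        person, dist = queue.popleft()
        if person == goal:
            return dist
        for nxt in friends[person]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # no connecting path
```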

Beginning in the late 1990s, sociologists, political scientists, and physicists such as Duncan J. Watts, Albert-László Barabási, Peter Bearman, Nicholas A. Christakis, and James H. Fowler advanced social network analysis, developing and applying new models and methods to emerging data about online social networks, as well as "digital traces" of face-to-face networks.

Levels of analysis

Self-organization of a network, based on Nagler, Levina, & Timme (2011)[31]

In general, social networks are self-organizing, emergent, and complex, such that a globally coherent pattern appears from the local interaction of the elements that make up the system.[32][33] These patterns become more apparent as network size increases. However, a global network analysis[34] of, for example, all interpersonal relationships in the world is not feasible and is likely to contain so much information as to be uninformative. Practical limitations of computing power, ethics, and participant recruitment and payment also limit the scope of a social network analysis.[35][36] The nuances of a local system may be lost in a large network analysis, hence the quality of information may be more important than its scale for understanding network properties. Thus, social networks are analyzed at the scale relevant to the researcher's theoretical question. Although levels of analysis are not necessarily mutually exclusive, there are three general levels into which networks may fall: micro-level, meso-level, and macro-level.
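
The Barabási model pictured earlier is a concrete example of such self-organization: a global pattern (a few highly connected hubs) emerges purely from a local rule. A minimal sketch of that preferential-attachment rule, simplified to one new tie per node:

```python
import random

def grow_network(n_nodes, seed=0):
    """Barabási-style growth: each new node attaches to one existing
    node chosen with probability proportional to its current degree."""
    rng = random.Random(seed)
    edges = [(0, 1)]                      # start from a single tie
    degree_pool = [0, 1]                  # node i appears degree(i) times
    for new in range(2, n_nodes):
        target = rng.choice(degree_pool)  # preferential attachment step
        edges.append((new, target))
        degree_pool += [new, target]
    return edges
```

No node "decides" to become a hub; repeated local choices biased toward already-popular nodes produce the globally skewed degree distribution.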

--
You received this message because you are subscribed to the Google Groups "1top-oldtattoo-2" group.
To unsubscribe from this group and stop receiving emails from it, send an email to 1top-oldtattoo-2+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/1top-oldtattoo-2/CA%2BqAbtFtZ4ui-oENUrmcrgNvaTfLj9oj_Y-MfQPY5yds5eaFgw%40mail.gmail.com.

Web 1.0

Web 1.0

Web 1.0 is a retronym referring to the first stage of the World Wide Web's evolution, from roughly 1989 to 2004. According to Graham Cormode and Balachander Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content".[14] Personal web pages were common, consisting mainly of static pages hosted on ISP-run web servers, or on free web hosting services such as Tripod and the now-defunct GeoCities.[15][16] With Web 2.0, it became common for average web users to have social-networking profiles (on sites such as Myspace and Facebook) and personal blogs (sites like Blogger, Tumblr and LiveJournal) through either a low-cost web hosting service or through a dedicated host. In general, content was generated dynamically, allowing readers to comment directly on pages in a way that was not common previously.[citation needed]

Some Web 2.0 capabilities were present in the days of Web 1.0, but were implemented differently. For example, a Web 1.0 site may have had a guestbook page for visitor comments, instead of a comment section at the end of each page (typical of Web 2.0). During Web 1.0, server performance and bandwidth had to be considered—lengthy comment threads on multiple pages could potentially slow down an entire site. Terry Flew, in his third edition of New Media, described the differences between Web 1.0 and Web 2.0 as a

"move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on "tagging" website content using keywords (folksonomy)."

Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze".[17]

Characteristics

Some common design elements of a Web 1.0 site include:[18]

Web 2.0

The term "Web 2.0" was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article "Fragmented Future":[3][21]

"The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will [...] appear on your computer screen, [...] on your TV set [...] your car dashboard [...] your cell phone [...] hand-held game machines [...] maybe even your microwave oven."

Writing when Palm Inc. introduced its first web-capable personal digital assistant (supporting Web access with WAP), DiNucci saw the Web "fragmenting" into a future that extended beyond the browser/PC combination it was identified with. She focused on how the basic information structure and hyper-linking mechanism introduced by HTTP would be used by a variety of devices and platforms. As such, her "2.0" designation referred simply to the next version of the Web, and does not directly relate to the term's current use.

The term Web 2.0 did not resurface until 2002.[22][23][24] Companies such as Amazon, Facebook, Twitter, and Google made it easy to connect and engage in online transactions. Web 2.0 introduced new features, such as multimedia content and interactive web applications, which mainly consisted of two-dimensional screens.[25] Kinsley and Eric focus on the concepts currently associated with the term where, as Scott Dietzen puts it, "the Web becomes a universal, standards-based integration platform".[24] The term gained popularity in 2004, when O'Reilly Media and MediaLive hosted the first Web 2.0 conference. In their opening remarks, John Battelle and Tim O'Reilly outlined their definition of the "Web as Platform", where software applications are built upon the Web as opposed to upon the desktop. The unique aspect of this migration, they argued, is that "customers are building your business for you".[26] They argued that the activities of users generating content (in the form of ideas, text, videos, or pictures) could be "harnessed" to create value. O'Reilly and Battelle contrasted Web 2.0 with what they called "Web 1.0". They associated this term with the business models of Netscape and the Encyclopædia Britannica Online. For example,

"Netscape framed 'the web as platform' in terms of the old software paradigm: their flagship product was the web browser, a desktop application, and their strategy was to use their dominance in the browser market to establish a market for high-priced server products. Control over standards for displaying content and applications in the browser would, in theory, give Netscape the kind of market power enjoyed by Microsoft in the PC market. Much like the 'horseless carriage' framed the automobile as an extension of the familiar, Netscape promoted a 'webtop' to replace the desktop, and planned to populate that webtop with information updates and applets pushed to the webtop by information providers who would purchase Netscape servers.[27]"

In short, Netscape focused on creating software, releasing updates and bug fixes, and distributing it to the end users. O'Reilly contrasted this with Google, a company that did not, at the time, focus on producing end-user software, but instead on providing a service based on data, such as the links that Web page authors make between sites. Google exploits this user-generated content to offer Web searches based on reputation through its "PageRank" algorithm. Unlike software, which undergoes scheduled releases, such services are constantly updated, a process called "the perpetual beta". A similar difference can be seen between the Encyclopædia Britannica Online and Wikipedia – while the Britannica relies upon experts to write articles and release them periodically in publications, Wikipedia relies on trust in (sometimes anonymous) community members to constantly write and edit content. Wikipedia editors are not required to have educational credentials, such as degrees, in the subjects in which they are editing. Wikipedia is not based on subject-matter expertise, but rather on an adaptation of the open source software adage "given enough eyeballs, all bugs are shallow". This maxim states that if enough users are able to look at a software product's code (or a website), then these users will be able to fix any "bugs" or other problems. The Wikipedia volunteer editor community produces, edits, and updates articles constantly. Web 2.0 conferences have been held every year since 2004, attracting entrepreneurs, representatives from large companies, tech experts and technology reporters.
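
The core of the PageRank idea mentioned above — reputation flowing along user-created links — can be sketched as a short power iteration. This is a minimal illustration on toy data, not Google's production algorithm (which handles dangling pages, scale, and much more):

```python
def pagerank(links, damping=0.85, iters=50):
    """Minimal PageRank by power iteration over a link graph given as
    {page: [pages it links to]}. Toy sketch; assumes every page has
    at least one outgoing link."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Every page keeps a small baseline rank...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and passes the rest of its rank along its outgoing links.
        for p, outs in links.items():
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# Hypothetical three-page web: "c" is linked to by both "a" and "b".
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(links)
```

Because "c" receives links from two pages while "b" receives only one, "c" ends up with the higher rank — reputation derived entirely from the link structure that page authors created.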

The popularity of Web 2.0 was acknowledged when TIME magazine named "You" its 2006 Person of the Year.[28] That is, TIME selected the masses of users who were participating in content creation on social networks, blogs, wikis, and media sharing sites.

In the cover story, Lev Grossman explains:

"It's a story about community and collaboration on a scale never seen before. It's about the cosmic compendium of knowledge Wikipedia and the million-channel people's network YouTube and the online metropolis MySpace. It's about the many wresting power from the few and helping one another for nothing and how that will not only change the world but also change the way the world changes."

Characteristics

Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site's content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By increasing emphasis on these already-extant capabilities, Web 2.0 sites encourage users to rely more on their browser for user interface, application software ("apps") and file storage facilities. This has been called "network as platform" computing.[5] Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress' easy-to-use blog and website creation tools), "tagging" (which enables users to label websites, videos or photos in some fashion), "like" buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking.

Users can provide the data and exercise some control over what they share on a Web 2.0 site.[5][29] These sites may have an "architecture of participation" that encourages users to add value to the application as they use it.[4][5] Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects.[30] Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet.[31]

Edit box interface through which anyone could edit a Wikipedia article

Web 2.0 offers almost all users the same freedom to contribute,[32] which can produce effects that members of a given community perceive as productive or not, and which can lead to emotional distress and disagreement. The impossibility of excluding group members who do not contribute to the provision of goods (i.e., to the creation of a user-generated website) from sharing the benefits (of using the website) gives rise to the possibility that serious members will prefer to withhold their contribution of effort and "free ride" on the contributions of others.[33] This requires what is sometimes called radical trust by the management of the website.

Encyclopaedia Britannica calls Wikipedia "the epitome of the so-called Web 2.0" and describes what many view as the ideal of a Web 2.0 platform as "an egalitarian environment where the web of social software enmeshes users in both their real and virtual-reality workplaces."[34]

According to Best,[35] the characteristics of Web 2.0 are rich user experience, user participation, dynamic content, metadata, Web standards, and scalability. Further characteristics, such as openness, freedom,[36] and collective intelligence[37] by way of user participation, can also be viewed as essential attributes of Web 2.0. Some websites require users to contribute user-generated content to have access to the website, to discourage "free riding".

A list of ways that people can volunteer to improve Mass Effect Wiki on Wikia, an example of content generated by users working collaboratively

The key features of Web 2.0 include:[citation needed]

  1. Folksonomy – free classification of information; allows users to collectively classify and find information (e.g. "tagging" of websites, images, videos or links)
  2. Rich user experience – dynamic content that is responsive to user input (e.g., a user can "click" on an image to enlarge it or find out more information)
  3. User participation – information flows two ways between the site owner and site users by means of evaluation, review, and online commenting. Site users also typically create user-generated content for others to see (e.g., Wikipedia, an online encyclopedia that anyone can write articles for or edit)
  4. Software as a service (SaaS) – Web 2.0 sites developed APIs to allow automated usage, such as by a Web "app" (software application) or a mashup
  5. Mass participation – near-universal web access leads to differentiation of concerns, from the traditional Internet user base (who tended to be hackers and computer hobbyists) to a wider variety of users, drastically changing the audience of internet users.
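
The folksonomy idea in the list above is, mechanically, an inverted index built from user-applied labels. A minimal sketch with hypothetical URLs and tags:

```python
from collections import defaultdict

# Hypothetical user-applied tags on shared links (folksonomy-style).
tagged = [
    ("url1", ["python", "web"]),
    ("url2", ["web"]),
    ("url3", ["python"]),
]

# Invert into a tag index so anyone can find items
# through the labels the community collectively applied.
index = defaultdict(list)
for item, tags in tagged:
    for tag in tags:
        index[tag].append(item)
```

No central taxonomy is designed up front; the classification scheme simply accumulates from individual tagging decisions.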

Technologies

The client-side (Web browser) technologies used in Web 2.0 development include Ajax and JavaScript frameworks. Ajax programming uses JavaScript and the Document Object Model (DOM) to update selected regions of the page area without undergoing a full page reload. To allow users to continue interacting with the page, communications such as data requests going to the server are separated from data coming back to the page (asynchronously).

Otherwise, the user would have to routinely wait for the data to come back before they can do anything else on that page, just as a user has to wait for a page to complete the reload. This also increases the overall performance of the site, as the sending of requests can complete more quickly, independent of the blocking and queueing required to send data back to the client. The data fetched by an Ajax request is typically formatted in XML or JSON (JavaScript Object Notation) format, two widely used structured data formats. Since both of these formats are natively understood by JavaScript, a programmer can easily use them to transmit structured data in their Web application.
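
The JSON half of that exchange is just serialization and parsing of structured data. Sketched here in Python rather than browser JavaScript (the field names are hypothetical):

```python
import json

# A server-side handler might serialize fresh data for an Ajax request...
payload = json.dumps({"user": "ana", "unread": 3})

# ...and on arrival the client parses it back into native structures,
# which it then uses to update only the affected region of the page.
data = json.loads(payload)
```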

When this data is received via Ajax, the JavaScript program then uses the Document Object Model to dynamically update the Web page based on the new data, allowing for rapid and interactive user experience. In short, using these techniques, web designers can make their pages function like desktop applications. For example, Google Docs uses this technique to create a Web-based word processor.

As a widely available plug-in independent of W3C standards (the World Wide Web Consortium is the governing body of Web standards and protocols), Adobe Flash was capable of doing many things that were not possible pre-HTML5. Of Flash's many capabilities, the most commonly used was its ability to integrate streaming multimedia into HTML pages. With the introduction of HTML5 in 2010 and the growing concerns with Flash's security, the role of Flash became obsolete, with browser support ending on December 31, 2020.

In addition to Flash and Ajax, JavaScript/Ajax frameworks have recently become a very popular means of creating Web 2.0 sites. At their core, these frameworks use the same technology as JavaScript, Ajax, and the DOM. However, frameworks smooth over inconsistencies between Web browsers and extend the functionality available to developers. Many of them also come with customizable, prefabricated 'widgets' that accomplish such common tasks as picking a date from a calendar, displaying a data chart, or making a tabbed panel.

On the server-side, Web 2.0 uses many of the same technologies as Web 1.0. Languages such as Perl, PHP, Python, and Ruby, as well as Enterprise Java (J2EE) and the Microsoft .NET Framework, are used by developers to output data dynamically using information from files and databases. This allows websites and web services to share machine-readable formats such as XML (Atom, RSS, etc.) and JSON. When data is available in one of these formats, another website can use it to integrate a portion of that site's functionality.
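
A minimal sketch of that server-side pattern — rendering database-style rows into a machine-readable, RSS-like fragment that another site could syndicate (the element layout is simplified, not a complete RSS 2.0 document):

```python
from xml.sax.saxutils import escape

# Hypothetical rows pulled from files or a database on the server side.
posts = [("First post", "2004-01-01"), ("Hello Web 2.0", "2004-02-01")]

# Emit an RSS-like fragment: one <item> per row, with text escaped
# so titles containing &, <, or > stay well-formed XML.
items = "\n".join(
    f"<item><title>{escape(title)}</title><pubDate>{date}</pubDate></item>"
    for title, date in posts
)
feed = f"<channel>{items}</channel>"
```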

Concepts

Web 2.0 can be described in three parts:

  • Rich web application — defines the experience brought from desktop to browser, whether it is "rich" from a graphical point of view or a usability/interactivity or features point of view.[contradictory]
  • Web-oriented architecture (WOA) — defines how Web 2.0 applications expose their functionality so that other applications can leverage and integrate the functionality providing a set of much richer applications. Examples are feeds, RSS feeds, web services, mashups.
  • Social Web — defines how Web 2.0 websites tend to interact much more with the end user and make the end user an integral part of the website, either by adding his or her profile, adding comments on content, uploading new content, or adding user-generated content (e.g., personal digital photos).

As such, Web 2.0 draws together the capabilities of client- and server-side software, content syndication and the use of network protocols. Standards-oriented Web browsers may use plug-ins and software extensions to handle the content and user interactions. Web 2.0 sites provide users with information storage, creation, and dissemination capabilities that were not possible in the environment known as "Web 1.0".

Web 2.0 sites include the following features and techniques, referred to as the acronym SLATES by Andrew McAfee:[38]


New media

New media are communication technologies that enable or enhance interaction between users as well as interaction between users and content.[1] In the middle of the 1990s, the phrase "new media" became widely used as part of a sales pitch for the influx of interactive CD-ROMs for entertainment and education.[2] The new media technologies, sometimes known as Web 2.0, include a wide range of web-related communication tools such as blogs, wikis, online social networking, virtual worlds, and other social media platforms.[3]

The phrase "new media" refers to computational media that share material online and through computers.[4] New media inspire new ways of thinking about older media. Media do not replace one another in a clear, linear succession, instead evolving in a more complicated network of interconnected feedback loops.[5] What is different about new media is how they specifically refashion traditional media and how older media refashion themselves to meet the challenges of new media.[6]

Unless they contain technologies that enable digital generative or interactive processes, broadcast television programs, feature films, magazines, and books are not considered to be new media.[4]

History

In the 1950s, connections between computing and radical art began to grow stronger. It was not until the 1980s that Alan Kay and his co-workers at Xerox PARC began to put the computing power of a personal computer in the hands of the individual, rather than having a big organization be in charge of it. In the late 1980s and early 1990s, however, we seem to witness a different kind of parallel relationship between social changes and computer design. Although causally unrelated, conceptually it makes sense that the Cold War and the design of the Web took place at exactly the same time.[4]

Writers and philosophers such as Marshall McLuhan were instrumental in the development of media theory during this period. McLuhan's now-famous declaration in Understanding Media: The Extensions of Man that "the medium is the message" drew attention to the too-often-ignored influence that media and technology themselves, rather than their "content", have on humans' experience of the world and on society broadly.

Until the 1980s, media relied primarily upon print and analog broadcast models such as television and radio. The last twenty-five years have seen the rapid transformation into media which are predicated upon the use of digital technologies, such as the Internet and video games. However, these examples are only a small representation of new media. The use of digital computers has transformed the remaining "old" media, as suggested by the advent of digital television and online publications. Even traditional media forms such as the printing press have been transformed through the use of image-manipulation software like Adobe Photoshop and desktop publishing tools.

Andrew L. Shapiro argues that the "emergence of new, digital technologies signals a potentially radical shift of who is in control of information, experience and resources".[7] W. Russell Neuman suggests that whilst the "new media" have technical capabilities to pull in one direction, economic and social forces pull back in the opposite direction. According to Neuman, "We are witnessing the evolution of a universal interconnected network of audio, video, and electronic text communications that will blur the distinction between interpersonal and mass communication; and between public and private communication".[8] Neuman argues that new media will:

  • Alter the meaning of geographic distance.
  • Allow for a huge increase in the volume of communication.
  • Provide the possibility of increasing the speed of communication.
  • Provide opportunities for interactive communication.
  • Allow forms of communication that were previously separate to overlap and interconnect.

Consequently, it has been the contention of scholars such as Douglas Kellner and James Bohman that new media and particularly the Internet will provide the potential for a democratic postmodern public sphere, in which citizens can participate in well informed, non-hierarchical debate pertaining to their social structures. Contradicting these positive appraisals of the potential social impacts of new media are scholars such as Edward S. Herman and Robert McChesney who have suggested that the transition to new media has seen a handful of powerful transnational telecommunications corporations who achieve a level of global influence which was hitherto unimaginable.

Scholars have highlighted both the positive and negative potential and actual implications of new media technologies, suggesting that some of the early work into new media studies was guilty of technological determinism – whereby the effects of media were determined by the technology themselves, rather than through tracing the complex social networks which governed the development, funding, implementation and future development of any technology.

Based on the argument that people have a limited amount of time to spend on the consumption of different media, displacement theory argues that the viewership or readership of one particular outlet leads to a reduction in the amount of time spent by the individual on another. The introduction of new media, such as the internet, therefore reduces the amount of time individuals would spend on existing "old" media, which could ultimately lead to the end of such traditional media.[9]

Definition

Although there are several ways that new media may be described, Lev Manovich, in an introduction to The New Media Reader, defines new media by using eight propositions:[4]

  1. New media versus cyberculture – Cyberculture is the various social phenomena that are associated with the Internet and network communications (blogs, online multi-player gaming), whereas new media is concerned more with cultural objects and paradigms (digital to analog television, smartphones).
  2. New media as computer technology used as a distribution platform – New media are the cultural objects which use digital computer technology for distribution and exhibition, e.g. (at least for now) the Internet, websites, computer multimedia, Blu-ray discs, etc. The problem with this is that the definition must be revised every few years. The term "new media" will not be "new" anymore, as most forms of culture will be distributed through computers.
  3. New media as digital data controlled by software – The language of new media is based on the assumption that, in fact, all cultural objects that rely on digital representation and computer-based delivery do share a number of common qualities. New media is reduced to digital data that can be manipulated by software as any other data. Now media operations can create several versions of the same object. An example is an image stored as matrix data which can be manipulated and altered according to the additional algorithms implemented, such as color inversion, gray-scaling, sharpening, rasterizing, etc.
  4. New media as the mix between existing cultural conventions and the conventions of software – New media today can be understood as the mix between older cultural conventions for data representation, access, and manipulation and newer conventions of data representation, access, and manipulation. The "old" data are representations of visual reality and human experience, and the "new" data is numerical data. The computer is kept out of the key "creative" decisions, and is delegated to the position of a technician, e.g. in film, software is used in some areas of production, while others are created using computer animation.
  5. New media as the aesthetics that accompanies the early stage of every new modern media and communication technology – While ideological tropes indeed seem to be reappearing rather regularly, many aesthetic strategies may reappear two or three times ... In order for this approach to be truly useful it would be insufficient to simply name the strategies and tropes and to record the moments of their appearance; instead, we would have to develop a much more comprehensive analysis which would correlate the history of technology with the social, political, and economic histories of the modern period.
  6. New media as faster execution of algorithms previously executed manually or through other technologies – Computers are a huge speed-up of what were previously manual techniques, e.g. calculators. Dramatically speeding up the execution makes possible previously non-existent representational techniques. This also makes possible many new forms of media art such as interactive multimedia and video games. While on one level a modern digital computer is just a faster calculator, we should not ignore its other identity: that of a cybernetic control device.
  7. New media as the encoding of modernist avant-garde; new media as metamedia – Manovich declares that the 1920s are more relevant to new media than any other time period. Metamedia coincides with postmodernism in that both rework old work rather than create new work. The new media avant-garde is about new ways of accessing and manipulating information (e.g. hypermedia, databases, search engines, etc.). Meta-media is an example of how quantity can change into quality, as new media technology and manipulation techniques can recode modernist aesthetics into a very different postmodern aesthetics.
  8. New media as parallel articulation of similar ideas in post–World War II art and modern computing – Post-WWII art or "combinatorics" involves creating images by systematically changing a single parameter. This leads to the creation of remarkably similar images and spatial structures. It illustrates that algorithms, an essential part of new media, do not depend on technology but can be executed by humans.
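Proposition 3 above — that once a cultural object is reduced to digital data, software can produce several versions of it — can be illustrated with a minimal sketch. The pixel values and helper functions here are invented for illustration (a real image library such as Pillow or NumPy would normally be used); a grayscale image is modeled as a matrix of 0–255 values:

```python
# Illustrative only: an image "reduced to digital data" is a matrix of
# gray values (0 = black, 255 = white). The values below are made up.
image = [
    [0, 64, 128],
    [64, 128, 192],
    [128, 192, 255],
]

def invert(img):
    """Color inversion: each pixel value v becomes 255 - v."""
    return [[255 - v for v in row] for row in img]

def threshold(img, cutoff=128):
    """A crude rasterizing step: map each pixel to pure black or white."""
    return [[255 if v >= cutoff else 0 for v in row] for row in img]

# Two different "versions of the same object", produced purely in software.
inverted = invert(image)
rastered = threshold(image)
```

The point of the sketch is Manovich's: neither operation touches any physical medium; both are just transformations of the same underlying numerical data.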

Globalization

The rise of new media has increased communication between people all over the world via the Internet. It has allowed people to express themselves through blogs, websites, videos, pictures, and other user-generated media.

Terry Flew stated that as new technologies develop, the world becomes more globalized. Globalization is more than the development of activities throughout the world; it allows the world to be connected no matter the distance from user to user,[10] and Frances Cairncross describes this development as the "death of distance".[11] New media has made friendships formed in digital social places more prominent than those formed in physical places.[12] Globalization is generally described as "more than expansion of activities beyond the boundaries of particular nation states".[13] New media "radically break the connection between physical place and social place, making physical location much less significant for our social relationships".[12]

However, the changes in the new media environment create a series of tensions in the concept of the "public sphere".[14] According to Ingrid Volkmer, the "public sphere" is defined as a process through which public communication becomes restructured and partly disembedded from national political and cultural institutions.[15] This trend of the globalized public sphere is not only a geographical expansion from a nation to the world, but also changes the relationship between the public, the media and the state.[15]

"Virtual communities" are being established online and transcend geographical boundaries, eliminating social restrictions.[16] Howard Rheingold describes these globalized societies as self-defined networks, which resemble what we do in real life. "People in virtual communities use words on screens to exchange pleasantries and argue, engage in intellectual discourse, conduct commerce, make plans, brainstorm, gossip, feud, fall in love, create a little high art and a lot of idle talk".[17] For Sherry Turkle "making the computer into a second self, finding a soul in the machine, can substitute for human relationships".[18] New media has the ability to connect like-minded others worldwide.

While this perspective suggests that technology drives – and is therefore a determining factor in – the process of globalization, arguments involving technological determinism are generally frowned upon by mainstream media studies.[19][20][21] Instead, academics focus on the multiplicity of processes by which technology is funded, researched and produced, forming a feedback loop: the technologies are used and often transformed by their users, which then feeds into the process of guiding their future development.

Commentators such as Manuel Castells[22] espouse a "soft determinism",[21] contending that "Technology does not determine society. Nor does society script the course of technological change, since many factors, including individual inventiveness and entrepreneurialism, intervene in the process of scientific discovery, technical innovation and social applications, so the final outcome depends on a complex pattern of interaction. Indeed the dilemma of technological determinism is probably a false problem, since technology is society and society cannot be understood without its technological tools".[22] This, however, is still distinct from stating that societal changes are instigated by technological development, which recalls the theses of Marshall McLuhan.[23][24]

Manovich[25] and Castells[22] have argued that whereas mass media "corresponded to the logic of industrial mass society, which values conformity over individuality,"[26] new media follows the logic of the postindustrial or globalized society whereby "every citizen can construct her own custom lifestyle and select her ideology from a large number of choices. Rather than pushing the same objects to a mass audience, marketing now tries to target each individual separately".[26]

The evolution of virtual communities highlights many aspects of the real world. Tom Boellstorff's studies of Second Life discuss a term known as "griefing": in Second Life, griefing means consciously upsetting another user during their experience of the game.[27] Other users described situations in which their avatars were raped or sexually harassed — the same types of actions carried out in the real world. Virtual communities are a clear demonstration of new media through new technological developments.

Anthropologist Daniel Miller and sociologist Don Slater used ethnographic studies to examine Trinidadian culture on online networks. The study argues that internet culture does exist and that this version of new media cannot eliminate people's relations to their geographic area or national identity. The focus on Trini culture specifically demonstrated how Trini values and beliefs were expressed within the page while also representing Trinidadian identities on the web.[28]

As tool for social change

Social movement media has a rich and storied history (see Agitprop) that has changed at a rapid rate since new media became widely used.[29] The Zapatista Army of National Liberation of Chiapas, Mexico, was the first major movement to make widely recognized and effective use of new media for communiques and organizing, in 1994.[29] Since then, new media has been used extensively by social movements to educate, organize, share cultural products of movements, communicate, build coalitions, and more. The protest activity around the WTO Ministerial Conference of 1999 was another landmark in the use of new media as a tool for social change. The WTO protests used new media to organize the original action, communicate with and educate participants, and serve as an alternative media source.[30] The Indymedia movement also developed out of this action and has been a great tool in the democratization of information, which is another widely discussed aspect of the new media movement.[31] Some scholars even view this democratization as an indication of the creation of a "radical, socio-technical paradigm to challenge the dominant, neoliberal and technologically determinist model of information and communication technologies."[32] A less radical view along these same lines is that people are taking advantage of the Internet to produce a grassroots globalization, one that is anti-neoliberal and centered on people rather than the flow of capital.[33] Chanelle Adams, a feminist blogger for the bi-weekly webpaper The Media, says that in her "commitment to anti-oppressive feminist work, it seems obligatory for her to stay in the know just to remain relevant to the struggle." For Adams and other feminists who work to spread their messages to the public, new media becomes crucial to completing this task, allowing people to access a movement's information instantaneously.

Some are also skeptical of the role of new media in social movements. Many scholars point out unequal access to new media as a hindrance to broad-based movements, sometimes even oppressing some within a movement.[34] Others are skeptical about how democratic or useful it really is for social movements, even for those with access.[35]

New media has also found a use with less radical social movements such as the Free Hugs Campaign, which uses websites, blogs, and online videos to demonstrate the effectiveness of the movement. Along with this example, the use of high-volume blogs has allowed numerous views and practices to become more widespread and gain more public attention. Another example is the ongoing Free Tibet Campaign, which has been seen on numerous websites and has a slight tie-in with the band Gorillaz in their Gorillaz Bitez clip featuring the lead singer 2D sitting with protesters at a Free Tibet protest. Another social change seen coming from new media is trends in fashion and the emergence of subcultures such as textspeak, cyberpunk, and various others.

Following trends in fashion and textspeak, New Media also makes way for "trendy" social change. The Ice Bucket Challenge is a recent example of this. All in the name of raising money for ALS (the lethal neurodegenerative disorder also known as Lou Gehrig's disease), participants are nominated by friends via social media such as Facebook and Twitter to dump a bucket of ice water on themselves, or donate to the ALS Foundation. This became a huge trend through Facebook's tagging tool, allowing nominees to be tagged in the post. The videos appeared on more people's feeds, and the trend spread fast. This trend raised over 100 million dollars for the cause and increased donations by 3,500 percent.

A meme, often seen on the internet, is an idea that has been replicated and passed along. Ryan Milner compared this concept to a possible tool for social change. The combination of pictures and text represents pop polyvocality ("the people's version"). A meme can make more serious conversations less tense while still displaying the situation at stake.[36]
