4 Pitfalls To Avoid When Choosing Tech For Your Business 

Technology is often thought of as the antidote to business woes. Once you get the right tech in place, the thinking goes, you’ll start doing whatever it is you do much faster, better and more efficiently. The thing about technology, though, is that new advancements hit the market daily.

Just think about how much artificial intelligence has advanced in a very short period of time. We went from Microsoft’s now-iconic Clippy to Google unveiling a chatbot with humanlike tendencies, Apple releasing an augmented reality upgrade to counter smartphone addiction and researchers from Cornell and the University of Pennsylvania developing an autonomous robot that can complete high-level tasks by sensing its surroundings.

Innovations such as these aren’t just fueling competition in the tech industry. They’ve made many companies question their relevance. Some would argue that they’ve led to a full-blown fear of missing out on the latest tech.

Source: 4 Pitfalls to Avoid When Choosing Tech for Your Business | Entrepreneur

Critics:

Philosophy of technology is a branch of philosophy that studies the “practice of designing and creating artifacts”, and the “nature of the things so created.” It emerged as a discipline over the past two centuries, and has grown “considerably” since the 1970s. The humanities philosophy of technology is concerned with the “meaning of technology for, and its impact on, society and culture”.

Initially, technology was seen as an extension of the human organism that replicated or amplified bodily and mental faculties. Marx framed it as a tool used by capitalists to oppress the proletariat, but believed that technology would be a fundamentally liberating force once it was “freed from societal deformations”.

Second-wave philosophers like Ortega later shifted their focus from economics and politics to “daily life and living in a techno-material culture,” arguing that technology could oppress “even the members of the bourgeoisie who were its ostensible masters and possessors.” Third-stage philosophers like Don Ihde and Albert Borgmann represent a turn toward de-generalization and empiricism, and considered how humans can learn to live with technology.

Early scholarship on technology was split between two arguments: technological determinism, and social construction. Technological determinism is the idea that technologies cause unavoidable social changes. It usually encompasses a related argument, technological autonomy, which asserts that technological progress follows a natural progression and cannot be prevented.

Social constructivists argue that technologies follow no natural progression, and are shaped by cultural values, laws, politics, and economic incentives. Modern scholarship has shifted towards an analysis of sociotechnical systems, “assemblages of things, people, practices, and meanings”, looking at the value judgments that shape technology.

Cultural critic Neil Postman distinguished tool-using societies from technological societies and from what he called “technopolies,” societies that are dominated by an ideology of technological and scientific progress to the detriment of other cultural practices, values, and world views. Herbert Marcuse and John Zerzan suggest that technological society will inevitably deprive us of our freedom and psychological health.

The ethics of technology is an interdisciplinary subfield of ethics that analyzes technology’s ethical implications and explores ways to mitigate the potential negative impacts of new technologies. There is a broad range of ethical issues revolving around technology, from specific areas of focus affecting professionals working with technology to broader social, ethical, and legal issues concerning the role of technology in society and everyday life.

Prominent debates have surrounded genetically modified organisms, the use of robotic soldiers, algorithmic bias, and the issue of aligning AI behavior with human values. Technology ethics encompasses several key fields. Bioethics looks at ethical issues surrounding biotechnologies and modern medicine, including cloning, human genetic engineering, and stem cell research.

Computer ethics focuses on issues related to computing. Cyberethics explores internet-related issues like intellectual property rights, privacy, and censorship. Nanoethics examines issues surrounding the alteration of matter at the atomic and molecular level in various disciplines including computer science, engineering, and biology. And engineering ethics deals with the professional standards of engineers, including software engineers and their moral responsibilities to the public.

A wide branch of technology ethics is concerned with the ethics of artificial intelligence: it includes robot ethics, which deals with ethical issues involved in the design, construction, use, and treatment of robots, as well as machine ethics, which is concerned with ensuring the ethical behavior of artificially intelligent agents. Within the field of AI ethics, significant yet-unsolved research problems include AI alignment (ensuring that AI behaviors are aligned with their creators’ intended goals and interests) and the reduction of algorithmic bias.

Some researchers have warned against the hypothetical risk of an AI takeover, and have advocated for the use of AI capability control in addition to AI alignment methods. Other fields of ethics have had to contend with technology-related issues, including military ethics, media ethics, and educational ethics. Futures studies is the systematic and interdisciplinary study of social and technological progress.

It aims to quantitatively and qualitatively explore the range of plausible futures and to incorporate human values in the development of new technologies. More generally, futures researchers are interested in improving “the freedom and welfare of humankind”. It relies on a thorough quantitative and qualitative analysis of past and present technological trends, and attempts to rigorously extrapolate them into the future. Science fiction is often used as a source of ideas.
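To make the idea of quantitative trend extrapolation concrete, here is a minimal, purely illustrative sketch in Python. The numbers are hypothetical (not real data), and the method shown is a simple least-squares fit of exponential growth, which is only one of many approaches a futures researcher might use:

```python
# Illustrative sketch: fitting an exponential trend to hypothetical capability
# data and extrapolating it forward (a Moore's-law-style projection).
import numpy as np

# Hypothetical observations: year vs. log10 of some capability metric.
years = np.array([2000, 2005, 2010, 2015, 2020])
log_metric = np.array([7.6, 8.4, 9.0, 9.6, 10.3])  # e.g. log10(transistors per chip)

# A straight-line fit in log space corresponds to exponential growth in raw units.
slope, intercept = np.polyfit(years, log_metric, 1)

# Naive single-point extrapolation to 2030; real futures research would pair this
# with uncertainty bounds and qualitative scenario analysis.
forecast_2030 = slope * 2030 + intercept
print(f"Projected log10(metric) in 2030: {forecast_2030:.1f}")
```

The point of the sketch is the limitation it exposes: a bare extrapolation yields a single number, whereas futures studies explicitly combines such quantitative estimates with qualitative judgment about plausible futures.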

Futures research methodologies include survey research, modeling, statistical analysis, and computer simulations. A related area is the study of global catastrophic risk: existential risk researchers analyze risks that could lead to human extinction or civilizational collapse, and look for ways to build resilience against them. Relevant research centers include the Cambridge Center for the Study of Existential Risk and the Stanford Existential Risk Initiative.

Future technologies may contribute to the risks of artificial general intelligence, biological warfare, nuclear warfare, nanotechnology, anthropogenic climate change, global warming, or stable global totalitarianism, though technologies may also help us mitigate asteroid impacts and gamma-ray bursts.

In 2019 philosopher Nick Bostrom introduced the notion of a vulnerable world, “one in which there is some level of technological development at which civilization almost certainly gets devastated by default”, citing the risks of a pandemic caused by bioterrorists, or an arms race triggered by the development of novel armaments and the loss of mutual assured destruction.

He invites policymakers to question the assumptions that technological progress is always beneficial, that scientific openness is always preferable, or that they can afford to wait until a dangerous technology has been invented before they prepare mitigations.

Emerging technologies are novel technologies whose development or practical applications are still largely unrealized. They include nanotechnology, biotechnology, robotics, 3D printing, blockchains, and artificial intelligence. In 2005, futurist Ray Kurzweil claimed the next technological revolution would rest upon advances in genetics, nanotechnology, and robotics, with robotics being the most impactful of the three technologies.

Genetic engineering will allow far greater control over human biological nature through a process called directed evolution. Some thinkers believe that this may shatter our sense of self, and have urged for renewed public debate exploring the issue more thoroughly; others fear that directed evolution could lead to eugenics or extreme social inequality. Nanotechnology will grant us the ability to manipulate matter “at the molecular and atomic scale”, which could allow us to reshape ourselves and our environment in fundamental ways.

Nanobots could be used within the human body to destroy cancer cells or form new body parts, blurring the line between biology and technology. Autonomous robots have undergone rapid progress, and are expected to replace humans at many dangerous tasks, including search and rescue, bomb disposal, firefighting, and war.

Estimates on the advent of artificial general intelligence vary, but half of machine learning experts surveyed in 2018 believe that AI will “accomplish every task better and more cheaply” than humans by 2063, and automate all human jobs by 2140.

This expected technological unemployment has led to calls for increased emphasis on computer science education and debates about universal basic income. Political science experts predict that this could lead to a rise in extremism, while others see it as an opportunity to usher in a post-scarcity economy.
