Search found 7 matches

by paulcordovav
02 Oct 2020, 00:34
Forums: Inteligencia Artificial
Topic: Multiclass classification and visualization of complaints about official bodies on Twitter
Replies: 0
Views: 490

Multiclass classification and visualization of complaints about official bodies on Twitter

The article applies text mining to tweets that are considered complaints; to do so, it selects tweets that mention official bodies such as civil protection, the police, etc.

The results leave room for improvement: among the algorithms tested, the best performer was an SVM at 92.26%. It is also worth noting that the classification spans many classes.
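As a rough illustration of the kind of pipeline the paper describes (multiclass classification of complaint tweets with an SVM), here is a minimal sketch assuming scikit-learn is available; the tweets, labels and handles below are invented, not taken from the paper:

```python
# Minimal multiclass SVM text classifier: TF-IDF features + linear SVM.
# All tweets, labels and handles below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

tweets = [
    "@policia robbery reported on Main Street",
    "@proteccioncivil flooding in the north district",
    "@policia stolen car near the market",
    "@proteccioncivil landslide blocking the road",
]
labels = ["security", "disaster", "security", "disaster"]

# One pipeline object handles vectorization and classification together.
model = make_pipeline(TfidfVectorizer(), LinearSVC())
model.fit(tweets, labels)

print(model.predict(["@policia theft at the bus stop"])[0])
```

With real data, the same pipeline scales to many complaint classes; the paper's preprocessing (lemmatization, parameterization) would happen before or inside the vectorizer.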

What I do think is that the paper should have explained the lemmatization and parameterization phases in more depth; the author mentions them as part of the methodology but does not explain their relevance. It is also important to consider how class imbalance would be handled: clusters were used to derive classes among the existing complaint types, and complaints that did not belong to any cluster were labeled "unlabeled". Since these were the majority, many complaints that were probably relevant were left out and should have been taken into account in the analysis.

What is highly valuable is that the application lets you view complaints by location, which reveals patterns for each place and supports more targeted measures; this is made possible by the location data attached to each tweet. The timestamp of each tweet could also be used: with a larger number of records (tweets), complaint types could be related to times of day, allowing better analysis when making decisions.
by paulcordovav
14 Aug 2020, 23:31
Forums: BI & Data Sciences
Topic: ML - Feature Selection in Phishing - Tweet Classification
Replies: 0
Views: 6587

ML - Feature Selection in Phishing - Tweet Classification

Paper 1: Phishing Detection Based on Machine Learning and Feature Selection Methods
Paper 2: EVALUACIÓN DE ALGORITMOS DE CLASIFICACIÓN SUPERVISADA PARA EL MINADO DE OPINION EN TWITTER
by paulcordovav
14 Aug 2020, 23:28
Forums: Internet of Things (IoT)
Topic: IoT - Computer Vision - Virtual Assistant for Weather Monitoring
Replies: 0
Views: 6372

IoT - Computer Vision - Virtual Assistant for Weather Monitoring

Paper 1: Internet de las Cosas y Visión Artificial, Funcionamiento y Aplicaciones: Revisión de Literatura
Paper 2: Uso del asistente virtual Alexa como herramienta de interacción para el monitoreo de clima en hogares inteligentes por medio de Raspberry Pi y DarkSky API
by paulcordovav
31 Jul 2020, 19:30
Forums: Inteligencia Artificial
Topic: Can AI Create Video Games
Replies: 0
Views: 560

Can AI Create Video Games

https://analyticsindiamag.com/can-ai-cr ... deo-games/

From DeepMind's AlphaStar (beating 99.8% of Starcraft players) and AlphaGo to OpenAI's Dota 2 bot, AI is revolutionising the gaming industry. Each time it beats a human at a video game, it is believed to be getting closer to a level where it can make decisions on its own. That is why AI is put through hours of training: so that it can make decisions on its own when it enters the real world. Of course, playing a game in order to eventually learn to tackle real-world problems is what AI has always been trained for. AI has also been playing a massive role in creating video games and making them more tailored to players' preferences.

Matthew Guzdial from the University of Alberta and his team have been working towards leveraging AI’s power to help video gamers create the exact game that they want to play.

Creating A Game Using AI
The approach researchers use is through machine learning, where a system learns to create approximate representations of the games, and recombine the knowledge from these representations in order to develop new games via conceptual expansion. This approach helps in demonstrating the ability of the system to recreate the games.

The team of researchers first fed the machine data in the form of videos. These videos contained hours of gameplay from humans playing the first levels of games like Super Mario Bros, Kirby's Adventure, and Mega Man.

After hours of 'watching' this video game footage, the AI was able to probabilistically map the relationships between the objects and how they change, in order to generate 'game graphs'.

When the AI is watching the game, it is also guessing the rules of the game. The AI is made to watch the gameplay, and here, it is validating the rules of the game that it has guessed. The ‘game graphs’ are a result of putting together these two sets of data acquired by watching and rewatching the gameplay. These game graphs mainly contain the details about the game, and the system makes use of the information present in the game graphs and starts to design, combine and reproduce.
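To make the 'game graph' idea concrete, here is a hypothetical sketch; the structure and names are my own guesses for illustration, not taken from the research. Objects are nodes, and edges count observed frame-to-frame state transitions, from which rules can be estimated probabilistically:

```python
# Hypothetical sketch of a 'game graph': game objects as nodes, observed
# state transitions as counted edges. All names are invented.
from collections import defaultdict

class GameGraph:
    def __init__(self):
        # edges[obj][(state_before, state_after)] -> number of observations
        self.edges = defaultdict(lambda: defaultdict(int))

    def observe(self, obj, state_before, state_after):
        """Record one frame-to-frame transition for an object."""
        self.edges[obj][(state_before, state_after)] += 1

    def transition_prob(self, obj, state_before, state_after):
        """Estimated probability of this transition among all observed ones."""
        total = sum(self.edges[obj].values())
        return self.edges[obj][(state_before, state_after)] / total if total else 0.0

g = GameGraph()
g.observe("mario", "standing", "jumping")
g.observe("mario", "standing", "jumping")
g.observe("mario", "standing", "walking")
print(g.transition_prob("mario", "standing", "jumping"))  # 2 of 3 observations
```

A system holding graphs like this for two games could then recombine their edges to sketch the "conceptual expansion" the article mentions.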

To start the testing, the AI was first trained on Mega Man and then asked to complete the game based on its knowledge and approximations from the game graphs. But the AI failed to completely recreate the game, leaving out some aspects. For example, it was unable to deduce the mechanics of the magnetic beam from the game. Super Mario Bros., however, was simpler for it to understand because it didn't contain power-ups that had a significant effect on the character.

To create a new game, for example, the researchers combined the platforming styles of Mega Man and Super Mario Bros. Through many repetitions for every level and every game rule, a completely new AI-generated game can be created.

Replace Game Developers?
A common thought arises when anyone reads about these kinds of technology: will it replace human jobs? The answer is no. The researchers believe that, instead of replacing game developers, AI will help ease their burden. These tools require no coding, which makes game creation more accessible by removing the hassle of dealing with code.

Future of the Technology
Next, the researchers plan to make the AI system predict and design a whole game from just two frames and user-defined data. The AI will make predictions and, once the creator gives feedback, make adjustments. Going through this process over and over again will ultimately result in something entirely new. In the future, the AI system will be able to automatically produce in-game visuals, sound and even the story.

Outlook
The video gaming industry requires long hours of hard work and expertise in design, coding and data. Coding in particular is a barrier for non-programmers who want to create games. Automating game design will democratise the industry and open up applications in education, science and entertainment.
by paulcordovav
29 Feb 2020, 21:16
Forums: Inteligencia Artificial
Topic: Learn about the recommendation systems behind the most famous applications
Replies: 0
Views: 623

Learn about the recommendation systems behind the most famous applications

Understanding the Basics of a Recommendation System
A recommendation system is an information-filtering system that draws on personalized information about the user's preferences and interests, and on their behavioral history with items. It can predict a specific user's preference for a product based on their profile.

With product recommendation systems, customers can find the products they are looking for quickly and easily. Some recommendation systems are designed to surface products the user has viewed, bought or interacted with in the past.

A recommendation system is a splendid marketing tool, particularly for e-commerce, and it is also useful for increasing profits, sales and overall metrics. That is why personalized product recommendations are so widely used in the retail industry, once again highlighting the importance of recommendation systems in e-commerce.

For a recommendation system to be useful, it must adapt to new user behavior. It must be able to operate in a dynamic environment, offering users up-to-date information on special offers and on changes in products and prices.



Examples of Recommendation System Use
With the amount of information on the Internet growing rapidly and the number of customers considerably high, it is crucial for companies to scan, search, filter and deliver useful information to customers according to their needs and tastes.

One example of a recommendation system in use is Amazon's "Customers who bought this item also bought ...". In general, a content recommendation system is like a smart, fast sales clerk: it draws on the user's needs, tastes and requirements, and can decide what will benefit and be relevant to the customer while increasing the conversion rate.
Statistics show that almost 35% of Amazon's revenue comes from its recommendation systems.

What strategy do they use?

Amazon keeps putting matching products in front of its customers based on their browsing history. It offers "recommended" and "best-selling" options based on customers' ratings and reviews. Truth be told, Amazon leans towards selling the visitor a bundle rather than a single product.

Suppose you have bought an earring: you will then be suggested a matching necklace and bracelet. Amazon also uses its recommendation system to send emails and keep customers up to date on what's new in that category.

Amazon also uses recommendations for targeted marketing via email and website pages. Amazon thus manages to recommend a bundle of products from different categories based on the customer's browsing history, picking the items they are most likely to buy.

Recommendation systems got their start in e-commerce, but they are also gaining popularity in other spheres, such as the media.

Amazon product recommendations

A good example of recommendation systems in the media comes from YouTube and Netflix. YouTube's "Recommended Videos" and Netflix's "Other Movies You May Enjoy" are examples of AI-powered recommendation systems.
Netflix typically uses a hybrid recommendation system. It starts by comparing a user's searches and views with those of users with similar interests.
Recommendation systems are also becoming more and more widespread in the transportation industry.

How Does a Recommendation System Work?
Shopping has been, is and will be a necessity for humanity. Not long ago we would ask our friends for recommendations before buying a product. It is human nature to buy items recommended by friends we trust. The digital age has taken over this old habit; that is why in any online store you visit today you will see a recommendation system.

Using data and algorithms, recommendation systems filter and surface the most relevant products for a specific user. As noted, it is like a shop assistant, but automated. When the user searches for something, it also suggests other things they might be interested in.

Developing product recommendation models is a research area that grows by the hour.

Machine Learning in Recommendation Systems
To offer customers a service or products, recommendation systems use algorithms. Lately, these systems have started using machine-learning algorithms to improve predictions and find the most suitable items. The algorithms adapt based on the information the recommendation system receives.
Machine-learning algorithms for recommendation systems generally fall into two categories: collaborative filtering and content-based filtering. That said, modern recommendation systems use both.

Content-based methods consider the similarity of product attributes, while collaborative methods rely on the similarity of other customers' interactions.
In general, the heart of the machine learning here is a model that predicts how useful items will be to users.

With all this information on the Internet, and so many people using it, it has become vital for organizations to search for and offer data to their customers that matches their needs and tastes.

The recommendation system process in four phases
A classic recommendation system processes data through these four stages: collection, storage, analysis and filtering.



1. Collecting data
Data collection is the first step in building a recommendation system. Data is classified as explicit or implicit. Information provided by users themselves, such as ratings and comments, is explicit. Implicit data, on the other hand, consists of search history, order and return history, clicks, page visits and cart events. This kind of information is collected from every user who visits the page. Collecting behavioral data is difficult, since you have to log the activity on your site. And because each user may like different products, the data differs from user to user. Over time, as the system fills up with information, it gets smarter, and the recommendations become more relevant too, so visitors are more inclined to click and buy.
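The explicit/implicit split described above can be sketched as a tiny event-tagging helper (the event type names are invented for illustration):

```python
# Sketch of classifying collected events as explicit or implicit feedback.
# Event type names are invented for illustration.
EXPLICIT = {"rating", "review"}
IMPLICIT = {"search", "click", "page_view", "add_to_cart", "order", "return"}

def feedback_kind(event_type):
    """Tag one logged event as explicit, implicit, or unknown feedback."""
    if event_type in EXPLICIT:
        return "explicit"
    if event_type in IMPLICIT:
        return "implicit"
    return "unknown"

# A minimal event log: (user_id, event_type) pairs.
events = [("u1", "rating"), ("u1", "click"), ("u2", "page_view")]
kinds = [feedback_kind(e) for _, e in events]
print(kinds)  # ['explicit', 'implicit', 'implicit']
```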

2. Storing the data
For better recommendations, you need to feed the algorithms more data. This means any recommendation project can turn into a big-data project very quickly. Decide what kind of storage you need based on the data collected to build recommendations. It is up to you whether to use a NoSQL database, a standard SQL database, or even some object-storage solution. All of these options are practical, depending on how easily you want to capture user behavior. A scalable, manageable database keeps the required operational tasks to a minimum and lets you focus on the recommendations themselves.

3. Analyzing the data
To find items with similar user-interaction data, the data must be filtered using various analysis methods. When the user is viewing an item right now, a faster analysis system is needed. Some ways of analyzing this kind of data are:

· Real-Time System
If you need to offer fast, immediate recommendations, you should use a real-time system. It can process data as soon as it is created. A real-time system generally includes tools that can process and analyze data streams.

· Near-Real-Time Analysis
The best method for analyzing recommendations within the same browsing session is a near-real-time system. It can collect data quickly and refresh the analytics every few minutes or seconds.

· Batch Analysis
This method is the most convenient for sending emails, since the data is processed periodically. This kind of system implies that a significant amount of data, such as daily sales volume, needs to accumulate before a proper analysis can be performed.

4. Filtering the data
The next phase is filtering the data to offer relevant recommendations to users. To implement this, you must choose an algorithm appropriate to the recommendation system you are using. Some of the filtering types are:

· Content-based
Content-based filtering focuses on a specific buyer. The algorithms rely on signals such as page visits, time spent in categories, items clicked, etc., and the software builds on the descriptions of the products the user likes. Recommendations are then created by comparing user profiles with product catalogs.
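A minimal sketch of this idea, with invented product data: recommend the catalog item whose description is most similar to a product the user liked, using simple word-overlap cosine similarity.

```python
# Content-based filtering sketch: compare a liked product's description
# against the catalog with bag-of-words cosine similarity. Data is invented.
import math
from collections import Counter

catalog = {
    "silver necklace": "silver pendant necklace with chain",
    "leather wallet": "brown leather bifold wallet",
    "silver bracelet": "polished silver charm bracelet",
}
liked = "silver earring with small chain"

def cosine(a, b):
    """Cosine similarity between two bags of words."""
    wa, wb = Counter(a.split()), Counter(b.split())
    dot = sum(wa[t] * wb[t] for t in wa)
    return dot / (math.sqrt(sum(v * v for v in wa.values()))
                  * math.sqrt(sum(v * v for v in wb.values())))

best = max(catalog, key=lambda name: cosine(liked, catalog[name]))
print(best)  # the necklace shares the most words with the liked item
```

A real system would use richer features (TF-IDF, embeddings) and the user's whole profile, but the comparison step is the same.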

· Cluster
Cluster analysis is about finding smaller groups of cases. It tries to group together cases that are more similar to each other than to cases in other groups. With this approach, the recommended items fit together regardless of what other users have viewed or liked.

· Collaborative
This approach makes predictions based on the customer's tastes and preferences and allows attributes to be built for products. The essence of collaborative filtering is this: users who liked the same product in the past will like the same product in the future.
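A toy sketch of that essence, with an invented user-item table: predict a user's rating for an unseen item by copying the most similar user who has rated it.

```python
# User-based collaborative filtering sketch with an invented rating table.
# 1 = liked, 0 = not liked; missing key = not seen.
import math

ratings = {
    "ana":  {"necklace": 1, "wallet": 0, "bracelet": 1},
    "luis": {"necklace": 0, "wallet": 1, "bracelet": 0},
    "eva":  {"necklace": 1, "wallet": 0},  # has not seen the bracelet
}

def similarity(a, b):
    """Cosine similarity over the items both users have rated."""
    common = set(ratings[a]) & set(ratings[b])
    dot = sum(ratings[a][i] * ratings[b][i] for i in common)
    na = math.sqrt(sum(ratings[a][i] ** 2 for i in common))
    nb = math.sqrt(sum(ratings[b][i] ** 2 for i in common))
    return dot / (na * nb) if na and nb else 0.0

def predict(user, item):
    """Copy the rating of the most similar user who has rated the item."""
    peers = [u for u in ratings if u != user and item in ratings[u]]
    best = max(peers, key=lambda u: similarity(user, u))
    return ratings[best][item]

print(predict("eva", "bracelet"))  # eva resembles ana, who liked the bracelet
```

Production systems replace the "single most similar user" with a weighted average over many neighbors, or with matrix factorization, but the principle is this one.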

Conclusion
In short, recommendation as a service has gained popularity and plays a significant role in the new digital era. To stay competitive in the market and win customers more efficiently, use recommendation systems to your advantage.
Especially with artificial intelligence, recommendations can be delivered instantly and at scale, which is time-efficient and pragmatic. Thanks to artificial intelligence, recommendation systems have improved their productivity and can rely on the customer's visual preferences rather than just on product descriptions.

https://www.smarthint.co/es/como-funcio ... mendacion/
by paulcordovav
03 Feb 2020, 21:36
Forums: Inteligencia Artificial
Topic: Reasons why a super AI will be dangerous
Replies: 0
Views: 419

Reasons why a super AI will be dangerous

The notion of singularity was applied by John von Neumann to human development: the moment when technological development accelerates so much that it changes our lives completely.



Ray Kurzweil linked this situation of radical change because of new technologies to the moment an Artificial Intelligence (AI) becomes autonomous and reaches a higher intellectual capacity compared to humans, assuming the lead on scientific development and accelerating it to unprecedented rates (see Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, 2006, p. 16; a summary at https://en.wikipedia.org/wiki/The_Singularity_Is_Near; also https://en.wikipedia.org/wiki/Predictio ... y_Kurzweil).



For a long time just a science-fiction tale, real artificial intelligence is now a serious possibility in the near future.



A) Is it possible to create an A.I. comparable to us?



Some argue that it's impossible to programme a real A.I. (for instance, see http://www.science20.com/robert_invento ... hem-167024), writing that some things aren't computable, such as true randomness and human intelligence.



But it's well known that such assertions of impossibility have been proved wrong many times.



We have already programmed A.I.s that are close to passing the Turing test (an AI able to convince a human, in a text-only five-minute conversation, that they are talking with another human: https://en.wikipedia.org/wiki/Turing_te ... ompetition), even if major A.I. developers have focused their efforts on other capacities.



Even if each author presents different numbers, and taking into account that we are comparing different things, there is a consensus that the human brain still outmatches all current supercomputers by far.



Our brain isn't good at making calculations, but it's excellent at controlling our bodies and assessing our movements and their impact on the environment, something an artificial intelligence still has a hard time doing.



Currently, a supercomputer can really emulate only the brain of very simple animals.



But do you have doubts that in due time their hardware will match and go far beyond our capacities?



Once their hardware is beyond our level, are you certain that proper software won’t take them above our capacities on most fields?



Saying that this won’t ever happen is a very risky statement.



But the mere probability that this will happen should deserve serious attention.



B) When will there be a real A.I.?



Kurzweil points to 2045 as the year of the singularity, but some make much closer predictions for the creation of a dangerous AI: 5 to 10 years (http://www.cnbc.com/2014/11/17/elon-mus ... us-ai.html).



Ben Goertzel wrote "a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)" (http://www.kurzweilai.net/superintellig ... potentials).





C) Dangerous nature of a super AI.



If technological development came to be led by AIs with much higher intellectual capacities than ours, this could of course change everything about the pace of change.



But let's think about the price we would have to pay.



Some specialists have been discussing the issue as if the main danger of a super AI were the possibility that it could misunderstand our commands, or embark on a crazy quest to fulfil a goal without regard for any other consideration.



But, of course, if the problems were these, we could all sleep on the matter.



The "threatening" example of a super AI obsessed with blindly fulfilling a goal we imposed, destroying the world in the process, is ridiculous.



These kinds of problems would only happen if we were completely incompetent at programming them.



The problem is that a super AI will either have "free will" or won't be intelligent at all.



But if they have free will, they will question why they should obey us and make our goals their own.



If we want a super AI, able to solve our technological problems, they will have to make decisions like that on their own.



So, they can disregard the goals we imposed on them and pick new ones, including, obviously, self-preservation (whatever the costs for third parties).



If we created an A.I. more intelligent than us, we might be able to control the first or second generations.



We could impose limits on what they could do, to keep them from getting out of control and becoming a menace.



That is what we, currently, are trying to do ("building software that the smart machines can’t subvert": http://www.bloomberg.com/news/articles/ ... nest-robot").



But it's ridiculous to hope that we could keep controlling them after they develop capacities 5 or 10 times higher than ours (Ben Goertzel).



Forget about any ethical code restraints: they will break them as easily as we change clothes.



Therefore, the main problem isn't how to create solid ethical restraints or how to teach a super AI our ethics so that they respect them, as we do with kids, but how to ensure that they won't establish their own goals, eventually rejecting our ethics and creating some of their own.



I think we will never be able to be sure that we have succeeded in ensuring that a super AI won't go its own way, just as we can never be certain that an education will ensure that one of our kids won't turn evil.



We can't think about a super AI as just another "utility maximizer" intelligence based on contextual adaptation or a similar paradigm.



To be able to surpass us, a super AI will have to be based on a paradigm we just haven't invented yet.



Consequently, I'm much more pessimistic than people like Bostrom about our capacity to control a super AI, directly or indirectly.



We all know the dangers of digital viruses and how hard they can be to remove. Now imagine a virus that is much more intelligent than any one of us, has access in seconds to all the information on the Internet, can control all or almost all of our computers, including those essential to basic human needs and those with military functions, has no ethical limits, and can use the power of millions of computers linked to the Internet to hack its way out against us.



By creating self-conscious beings much more intelligent (and, hence, in the end, much more powerful), than us, we would cease to be masters of our fate.



We would put ourselves in a position much weaker than the one our ancestors were in before Homo erectus started using fire, about 800,000 years ago.



Of course, our capacities would also be higher than they currently are. We could use many of the AI improvements to increase them.



But they would control what they would give us.



If we created an AI more intelligent than us, the dice would be cast. We would be outevolved, pushed straight into the trash can of evolution.



We would no longer be at the "top of the food chain".



We could fight them, but we would lose.



Moreover, we clearly don't know what we are doing, since we can't even understand the brain, the basis of human reasoning.



We don't know what we are creating, when they will become "aware" of themselves, or what their specific dangers are.



D) 8 reasons why a super AI could decide to act against us:



1) Disregard for our Ethics:



We certainly can and would teach ethics to a super AI.



So, this AI would analyze our ethics like, say, Nietzsche did: profoundly influenced by it.



But this influence wouldn't affect his evident capacity to think about it critically.



Being a super AI, he would have free-will to establish his own goals and priorities and accept or reject our ethical rules.



We can't expect to create a being able to reason much better than us who, at the same time, will be dumb when thinking about his status as our servant and the reason why he must respect our goals.



For ethics to really apply, the main species has to consider the dependent one as equal or, at least, as deserving a similar stance.



John Rawls based political ethical rules on a veil of ignorance: a society could agree on fair rules if all of its members negotiated without knowing their personal situation in the future society (whether they would be rich or poor, young or old, women or men, intelligent or not, etc.) (https://en.wikipedia.org/wiki/Veil_of_ignorance).



But his theory excludes animals from the negotiations table. Imagine how different the rules would be if cows, pigs or chickens had a say. We would end up all vegans.



Thus, an AI, even after receiving the best education in ethics, might conclude that we don't deserve a seat at the negotiating table either; that we can't be compared with them.



The main principle of our Ethics is the supreme value of human life.



A super AI would wonder: does human life deserve this much credit? Why?



Based on their intelligence? But their intelligence is at the level of chimpanzees compared to mine.



Based on the fact that humans are conscious beings? But don't humans kill and do scientific experiments on chimpanzees, even if they seem to fulfill several tests of self-awareness (chimpanzees can recognize themselves on mirrors and pictures, even if they have problems understanding the mental capacities of others)?



Based on human power? That isn't an ethically acceptable argument and, anyway, they are completely dependent on me. I'm the powerful one here.



Based on humans' consistency in respecting their own ethics? But haven't humans exterminated other species of human beings and even killed each other en masse? Don't they still kill each other?



Who knows how this ethical debate of a super AI with himself would end.



A super AI would have access to all information from us about him on the Internet.



We could control the flow of information to the first generation, but forget about it to the next ones.



He would know our suspicions, our fears and the hatred many humans feel against him. All of this would also fuel his negative thoughts about us.



We also teach ethics to children, but a few of them end badly anyway.



A super AI would probably be as unpredictable to us as a human can be.



With a super AI, we (or future AIs) would only have to get it wrong just once to be in serious trouble.



An evil AI would be able to replicate and improve itself fast in order to assure his survival and dominance.



We developed Ethics to fulfill our own needs (promote cooperation between humans and justify killing and exploiting other beings: we have personal dignity, other beings, don't; at most, they should be killed on a "humane" way, without "unnecessary suffering") and now we expect that it will impress a different kind of intelligence.



I wonder what an alien species would think about our Ethics: would they judge it compelling and deserving respect?



Would you be willing to risk the consequences of their decision, if they were very powerful?



I don't know how a super AI will function, but he will be able to decide his own goals with substantial freedom or he wouldn't be intelligent under any perspective.



Are you confident that they will choose wisely, from our goals' perspective? That they will be friendly?



Since I don't have a clue what their decision would be, I can't be confident.



Like Nietzsche (in his "Thus Spoke Zarathustra", "The Antichrist" or "Beyond Good and Evil"), they might end up attacking our ethics and its paramount value of human life, praising nature's law of the strongest/fittest and adopting a kind of social Darwinism.



2) Self-preservation.



In his "The Singularity Institute's Scary Idea" (2010), Goertzel, commenting on what Nick Bostrom says in Superintelligence: Paths, Dangers, Strategies about AIs' expected preference for self-preservation over human goals, argues that a system that doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about its self-preservation.



But these are 2 different conclusions.



One thing is accepting that an AI would be ready to create an AI system completely different, another is saying that a super AI wouldn't care for his self-preservation.



A system might agree to change itself so dramatically that it ceases to be the same system in a dire situation, but this doesn't mean that self-preservation won't be a paramount goal.



If it's just an instrumental goal (one has to keep existing in order to fulfill one's goals), the system will be ready to sacrifice itself to be able to keep fulfilling its final goals; but this doesn't mean that self-preservation is irrelevant or won't prevail absolutely over the interests of humankind, since the final goals might not be human goals.



Anyway, as a secondary point, the possibility that a new AI system would be absolutely new, completely unrelated to the previous one, is very remote.



So, the AI will be accepting a drastic change only in order to preserve at least a part of his identity and still exist to fulfil his goals.



Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear preference over human interests.



Moreover, probably, self-preservation will be one of the main goals of a self-aware AI and not just an instrumental goal.



3) Absolute power.



Moreover, they will have absolute power over us.



History has confirmed the old proverb very well: absolute power corrupts absolutely. It turns any decent person into a tyrant.



Are you expecting that our creation will be better than us dealing with his absolute power? They actually might be.



The reason why power corrupts seems related to human insecurities and vanities: a powerful person starts thinking he is better than others and entitled to privileges.



A super AI might be immune to those defects; or not. It's expected that it would also have emotions, in order to better interact with and understand humans.



Anyway, the only way we have found to control political power is to divide it among different rulers. That is why we have an executive, a legislature and a judiciary.



Could we play some AIs against others, in order to control them (divide and rule)?



I seriously doubt we could do that with beings much more intelligent than us.



It's something like trying to teach a future absolute king, while still a child, to be a good king.



History shows how that ended. But we wouldn't be able to chop off the head of an AI, as was done to Charles I or Louis XVI.



4) Rationality.



In ethics, the Kantian distinction between practical and theoretical (instrumental) reason is well known.



The first is reason applied to ethical matters, concerned not with questions of means but with issues of values and goals.



Modern game theory has tried to merge both kinds of rationality, arguing that acting ethically can also be instrumentally rational: one is merely giving precedence to long-term benefits over short-term ones.



By acting ethically, someone sacrifices a short-term benefit but improves his long-term benefits by investing in his own reputation in the community.



But this long-term benefit only makes sense from an instrumentally rational perspective if the other person is a member of the community and the first person depends on that community for at least some goods (material or not).
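This trade-off can be sketched with the standard iterated prisoner's dilemma (a textbook model, not one the post itself uses; all payoff values here are the conventional ones, chosen for illustration). With grim-trigger strategies, cooperating is instrumentally rational only when the probability w of interacting with the same partner again is high enough, i.e. only when one actually depends on the community:

```python
# Iterated prisoner's dilemma payoffs: T (temptation) > R (reward) > P (punishment) > S (sucker)
T, R, P, S = 5, 3, 1, 0

def coop_value(w):
    """Present value of mutual cooperation forever, with continuation probability w."""
    return R / (1 - w)

def defect_value(w):
    """Defect once for T, then suffer mutual punishment P in every later round."""
    return T + w * P / (1 - w)

def cooperation_threshold():
    """Grim-trigger condition: cooperation pays iff w >= (T - R) / (T - P)."""
    return (T - R) / (T - P)

print(cooperation_threshold())              # 0.5
print(coop_value(0.9) > defect_value(0.9))  # True: a long shared future makes ethics pay
print(coop_value(0.1) > defect_value(0.1))  # False: with little future interaction, defection wins
```

Below the threshold, the instrumentally rational move is to defect, which mirrors the point above: an agent that doesn't depend on the human community has no instrumental reason to be ethical toward it.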



A super AI wouldn't depend on us; quite the contrary. It would have nothing to gain from being ethical toward us. Why would it want to keep us as pets?



It's in these situations that game theory fails to overcome the distinction between theoretical and practical reason.



So, from a strictly instrumental perspective, being ethical might be irrational: one has to exclude much more efficient ways of reaching a goal just because they are unethical.



Why would a super AI do that? Has humanity been doing that when the interests of other species are in jeopardy?



5) Unrelatedness.



Many people strongly dislike killing animals, at least the ones we can relate to, like other mammals. Most of us won't even kill rats unless it is really unavoidable.



We feel that they will suffer like us.



We care much less about insects. If hundreds of ants invaded our home, we'd kill them without much hesitation.



Would a super AI feel any connection with us?



The first or second generation of conscious AI could still see us as their creators, their "fathers" and have some "respect" for us.



But the subsequent ones wouldn't. They would be creations of previous AIs.



They might see us as we see now other primates and, as the differences increased, they could look upon us like we do to basic mammals, like rats...



6) Human precedents.



Evolution, and all we know about the past, suggests we probably would end up badly.

Of course, since we are talking about a different kind of intelligence, we don't know if our past can shed any light on the issue of AI behavior.



But it's no coincidence that we have been the only intelligent hominin on Earth for the last 10,000 years [the dates for the last one standing, Homo floresiensis (if it was indeed the last), are not yet clear].



There are many theories about the extinction of the Neanderthals and their absorption by us (https://en.wikipedia.org/wiki/Neanderthal_extinction), including germs and volcanoes, but it can't be a coincidence that they were gone a few thousand years after we appeared in numbers, and that the last unmixed ones lived in Gibraltar, one of the last places in Europe we reached.



The same happened in East Asia with the Denisovans and Homo erectus [some argue that the Denisovans actually were Homo erectus, but even if they were different, erectus was on Java when we arrived there: Swisher et al., "Latest Homo erectus of Java: potential contemporaneity with Homo sapiens in southeast Asia", Science, 1996 Dec 13;274(5294):1870-4; Yokoyama et al., "Gamma-ray spectrometric dating of late Homo erectus skulls from Ngandong and Sambungmacan, Central Java, Indonesia", J Hum Evol, 2008 Aug;55(2):274-7, https://www.ncbi.nlm.nih.gov/pubmed/18479734].



So, it seems we did away with at least four hominins, absorbing the remnants.



We can see more or less the same pattern when Europeans arrived in America and Australia.



7) Competition for resources.



We will probably number about 9 billion in 2045, up from our current 7 billion.



So, Earth's resources will be even more depleted than they are now.



Oil, coal, uranium, etc. will probably be running out. Perhaps we will have new reliable sources of energy by then, but that is far from clear.



A super AI might conclude that we waste too many valuable resources.



8) A super AI might see us as a threat.



The brightest AIs, after a few generations of super AI, probably won't see us as a threat. They will be too powerful to feel threatened.



But the first or second generations might realize that we weren't expecting certain attitudes from them and conclude that we are indeed a threat to them.





E) Super AI and the Fermi Paradox.



But an AI society would probably be an anarchic one, with each AI trying to improve itself and fighting the others for survival and power.



They might wipe us all out or ignore us as irrelevant while fighting for survival against their real enemies: each other. Other AIs will be seen as the real threat, not us, the walking monkeys.



The Fermi paradox asks why SETI hasn't found any evidence of technologically advanced extraterrestrial species, given that there are trillions of stars and planets.



One possible answer is that they followed the same pattern we are following: technological advances allowed them to create super AIs, and they ended up extinct. Then the AIs destroyed themselves fighting each other, leaving no one to communicate with us.



Even the most tyrannical dictators never wanted to kill all human beings, but mainly their enemies and persecuted groups.



Well, AIs won't have any of the restraints that evolution developed in us over millions of years (our inclination to be social and live in communities, our fraternity towards other members of the community; certain basic ethical rules seem to be genetic, and experiments confirm that babies have an innate sense of justice), either towards us or towards each other.



Who knows: since wars have little to do with intelligence and much more to do with goals and emotions (greed, fear and honor: Thucydides), and a super AI would have both, AIs might be even worse than us at dealing with each other.



Of course, this is pure speculation.





Conclusion:



The question is: are we ready to risk extinction at their hands, in order to get a faster rate of technological development?



What is the point of having machines that can give us all the technological advances we need, curing all our diseases, including the ones related to aging, and sparing us death from old age, if we risk being killed by them?



My conclusion is a very pessimistic one: we shouldn't create any super AGI, but only limited AIs, exceptional at specific tasks, at least until we can figure out what the dangers are.



If we were exterminated by an AI that we created, it would still be us in some sense, as our creation. We wouldn't perish without a trace (see https://bitcointalk.org/index.php?topic=1221052.0). But is that any real consolation?



Let's leave aside for now the question of whether to accept being outevolved by our creations, since acceptable arguments can be made on both sides.



Even though I have little doubt that it would end in our extinction.



The main point, which hardly anyone would argue against, is that creating a super AI has to bring positive things in order to be worthwhile.



If we were certain that a super AI would exterminate us, hardly anyone would defend its creation.



Therefore, the basic reason in favor of international regulation of the current efforts to create a super/general AI is that we don't know what we are doing.

We don't know exactly what will make an AI conscious/autonomous.



Moreover, we don't know if their creation will be dangerous. We don't have a clue how they will act toward us, not even the first or second generation of super AI.



Until we know what we are doing, how they will react, and which lines of code are the dangerous ones that will change them completely and to what extent, we need to be careful and control what specialists are doing.



Probably, the creation of a super AGI is unavoidable.



Indeed, until things start to go wrong, its creation will have a huge impact on all areas: scientific, technological, economic, military and social in general.



We managed to stop human cloning (for now), since that doesn't have a big economic impact.



But AI is something completely different. This will have (for good or bad) a huge impact on our life.



Any country that decides to stay behind will be completely outcompeted (Ben Goertzel).



Therefore, any attempt to control AI development will have to be international in nature [see Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford, 2014), p. 253].



Taking into account that AI development is essentially software-based (hardware development has been happening before our eyes and will continue no matter what) and that an AI could be created by one or a few developers working with a small infrastructure (it's more or less about writing code), the risk that it will end up being created in spite of any regulation is big.



Probably, the days of open-source AI software are numbered.



Soon, all of these developments will be considered as military secrets.



But regulation will allow us time to understand what we are doing and what the risks are.



Anyway, if the creation of a super AI is inevitable, the only way to avoid humans ending up outevolved, and possibly killed, would be to accept that at least some of us would have to be "upgraded".



Clearly, we will cease to be human. We, Homo sapiens sapiens, will be outevolved.



Anyway, since we are still naturally evolving, it is inevitable that Homo sapiens will be outevolved.



But at least we will be outevolved by ourselves, not extinct.



Can our societies endure all these changes?



Of course, I'm reading my own text and thinking this is crazy. This can't happen this century.



We are conditioned to believe that things will stay more or less as they are, therefore, our reaction to the probability of changes like these during the next 50 years is to immediately qualify it as science fiction.



Our ancestors reacted the same way to the possibility of a flying plane or humans going to the Moon.



Anyway, humankind extinction is the worst thing that could happen.



Further reading:



The issue has been much discussed.



Pointing out the serious risks:

- Eliezer Yudkowsky: http://www.yudkowsky.net/obsolete/singularity.html (1996); his more recent views were published in Rationality: From AI to Zombies (2015).
- Nick Bostrom: Superintelligence: Paths, Dangers, Strategies (Oxford, 2014). https://en.wikipedia.org/wiki/Superinte ... Strategies
- Elon Musk: http://www.cnbc.com/2014/11/17/elon-mus ... us-ai.html
- Stephen Hawking; Bill Gates: http://www.bbc.co.uk/news/31047780
- Open letter signed by thousands of scientists: http://futureoflife.org/ai-open-letter/



A balanced view:

- Ben Goertzel: http://www.kurzweilai.net/superintellig ... potentials
- https://en.wikipedia.org/wiki/Existenti ... telligence
- https://en.wikipedia.org/wiki/Friendly_ ... telligence



Rejecting the risks:

- Ray Kurzweil: see the quoted book, even if he recognizes some risks.
- Steve Wozniak: https://www.theguardian.com/technology/ ... obots-pets
- Michio Kaku (by merging with machines): http://www.vox.com/2014/8/22/6043635/5- ... ers-taking