TEDx Talk

If you didn’t get a chance to see my TEDx talk live, the video has just been produced and uploaded to the TEDx channel on YouTube (below).

The talk is about some of my work in artificial intelligence: specifically, the results we’ve observed in our research on synthetic neurointerfaces. Our goal was to functionally and synthetically model human neocortical columns in an artificial intelligence, giving more differentiated insight into the cognitive behaviors we, as humans, exhibit on a daily basis.

If you would like to know more, I have published the working paper here.

Please let me know what you all think in the comments section below or on YouTube; I would love all the feedback I can get!


Fluid Intelligence: Introduction


Fluid intelligence: the capacity to think logically and solve problems in novel situations, independent of acquired knowledge

Psychology locates the basis of fluid intelligence in the juxtaposition of layered memory and its application, essentially “connecting two fluid ideas through an abstractly analogous property”. A mathematical design would therefore have to derive temporal relationships, with weighted bonds, between two coherently disparate concepts by means of shared properties. These properties would have to be self-defined and self-propagated within idea types.
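To make that concrete, here is a minimal sketch of the structure being described, with hypothetical names (`IdeaNode`, `Bond`): two disparate idea nodes joined by a weighted bond through a shared, self-defined property.

```python
from dataclasses import dataclass, field

@dataclass
class IdeaNode:
    """An 'idea' whose properties are self-defined within its idea type."""
    name: str
    properties: dict = field(default_factory=dict)

@dataclass
class Bond:
    """A weighted bond connecting two disparate ideas via a shared property."""
    a: IdeaNode
    b: IdeaNode
    shared_property: str
    weight: float
```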

Why?

In the pursuit of a truly dynamic artificial intelligence, it is necessary to establish a recurrent method for deciphering the presence of concrete yet abstract entities (“ideas”) independent of a related and coherent topic set.
A considerable amount of work in this field has culminated in the prevalence of statistical methods that extract probabilistic models from large amounts of unstructured data. These Bayesian analytic techniques often yield an understanding that is superficial in the context of true relational understanding. Furthermore, this “bag-of-words” approach to unstructured data (quantifiable by the correct relationships derived between idea nodes) often produces only a single-dimensional understanding of the topics at hand. Traditionally, once these topics are transformed, it is difficult to extract hierarchy and queryable relations from the derived data set using matrix transformations.
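As a toy illustration of that single-dimensional view (my example, not from the paper): two statements with opposite relational meaning collapse to the same bag of words.

```python
from collections import Counter

a = Counter("the analyst influenced the market".split())
b = Counter("the market influenced the analyst".split())
print(a == b)  # True: word counts alone cannot capture the relation's direction
```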

The project that I will be describing in the subsequent posts is an effort to change the approach from which dynamic fluid intelligence is derived, finding its backbone in streaming big data. Ideally, this model would take a layered, multi-dimensional approach to autonomously identifying the properties of dynamically changing ideas from portions of that data set. It would also find types of relationships, deriving a set of previously undefined relational schemas through unsupervised machine learning techniques and ultimately allowing for a queryable graph whose properties and nodes are initially undefined.

Big Data Correlation: Purpose

Question

About 1.8 zettabytes (1.8 trillion gigabytes) of data are created every year. Somewhere in all this data are answers to problems we have wondered about for ages; the challenge is processing the information efficiently and deriving correlations from the complexity of the data on the internet. You may not be able to prove a hypothesis scientifically, but you may be able to support it statistically with the huge amounts of evidence hidden somewhere in this intimidating data set. So is it possible to mine hidden information at these scales? Can one use existing technologies such as Apache Hadoop, Nutch, MapReduce, and the Google API to develop an engine that derives comprehensible correlational data autonomously and efficiently?

Purpose

With all this data being produced every year, finding a radical and innovative way of processing large and complex data sets is a need that remains unfulfilled. For any computer, processing unstructured data is an arduous and lengthy process (and virtually all of the internet’s data is unstructured). This engine implementation is an attempt at combining multiple high-end technologies, working in unison, to crunch and sift through large and complex data sets…

Comparison of Genetic Operators for Solving the Traveling Salesman Problem: Selection

In comparing selection methods, it was in our best interest to leave as little as possible to randomness outside the selection method itself. The mutation method was center inverse mutation throughout all the trials, and a center mutation point was chosen every time. The cutoff percentage was the same (30%) for each trial, and the number of generations was fixed at 5000.

The numbers displayed below are the averages of 10 trials conducted with the same input graph but a different initial population for each trial.

Selection Comparison (results chart)

Genetic Algorithm: Selection

In every generation, a selection agent comes into play, sifting the fit chromosomes from the unfit. The selection agent “kills off” a user-specified percentage of organisms in the population. However, it is at the discretion of the selection agent to determine which chromosomes to kill. As mentioned earlier, fitness is defined by having the lowest weight under the constraints put forth by the TSP; however, selection need not be based solely on fitness. This can be seen when comparing the two most prevalent types of selection operators.
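The post is truncated before the two operators are named, but the behavior described above (killing off a fixed, user-specified fraction of the population) can be sketched as a simple truncation-style agent; the names and the 30% default here are illustrative, echoing the cutoff used in the trials.

```python
def selection_agent(population, fitness, cutoff=0.30):
    """Kill off the least-fit fraction of the population.
    For the TSP, lower path weight means higher fitness."""
    ranked = sorted(population, key=fitness)  # lowest weight first
    survivors = max(1, round(len(ranked) * (1 - cutoff)))
    return ranked[:survivors]
```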

Genetic Algorithm: Mutation

During the progression of a genetic algorithm, the population can hit a local optimum (or extremum). Nature copes with local optima by adding random genetic diversity to the population “every so often” with the help of mutation. Our genetic algorithm accomplishes this via the mutation operator. Although there is a plethora of mutation types, our GA focused on a select two (a code sketch of both follows the examples below):

1. Reverse Sequence Mutation – In the reverse sequence mutation operator, we take a random point in the sequence (organism) and split the path (P1) there. The second half of the split path (P1H2) is then inverted and appended to the end of the first half (P1H1), with the necessary corrections made to ensure the last node matches the start node, yielding the final mutated path (M1).

P1: {A, C, J | D, G, H, E, B, F, I, A}  ⇒ M1: {A, C, J, I, F, B, E, H, G, D, A}

2. Center Inverse Mutation – The chromosome (path or organism) is divided into two sections at the middle of the chromosome. Each of these sections is then inverted and added to a new path. The order of the halves remains constant, meaning the inverted first half remains the first half of the mutated path. The necessary corrections are made to amend the mutated path into a viable solution to the TSP.

P1: {A, C, J, D, G, H, E | B, F, I, A}  ⇒ M1: {A, E, H, G, D, J, C, I, F, B, A}
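Since both operators reduce to slicing and reversing lists, here is a minimal Python sketch of the two mutations, assuming a tour is stored as a list of node labels whose first and last entries are the start node; the `repair` helper is a hypothetical stand-in for the “necessary corrections” mentioned above.

```python
import random

def repair(sequence, depot):
    """Apply the 'necessary corrections': put the start node first
    and close the tour so the path remains viable for the TSP."""
    interior = [node for node in sequence if node != depot]
    return [depot] + interior + [depot]

def reverse_sequence_mutation(path, point=None):
    """RSM: split the tour at a point and invert everything after it."""
    if point is None:
        point = random.randrange(1, len(path) - 1)
    return repair(path[:point] + path[point:][::-1], path[0])

def center_inverse_mutation(path, mid=None):
    """CIM: split the tour near its middle and invert each half in place."""
    if mid is None:
        mid = len(path) // 2
    return repair(path[:mid][::-1] + path[mid:][::-1], path[0])

p1 = list("ACJDGHEBFIA")
print(reverse_sequence_mutation(p1, point=3))
# ['A', 'C', 'J', 'I', 'F', 'B', 'E', 'H', 'G', 'D', 'A']  (M1 in example 1)
print(center_inverse_mutation(p1, mid=7))
# ['A', 'E', 'H', 'G', 'D', 'J', 'C', 'I', 'F', 'B', 'A']  (M1 in example 2)
```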

Genetic Algorithms: Crossover

The method of crossover remains fairly constant regardless of the problem and scope. Crossover is achieved by first selecting a crossover point within a pair of defined and unique organisms P1 and P2 (which serve as the parents of the children produced by the crossover). The chromosomes are then split at the selected crossover point. The second half of P2 (P2H2) is then appended to the first half of P1 (P1H1) to make one child chromosome (C1). The second child (C2) is made by appending the second half of P1 (P1H2) to the first half of P2 (P2H1).
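A minimal sketch of this one-point crossover, under the same list-of-nodes representation as above; note that for the TSP the raw children can repeat or drop cities, so some repair pass (not shown in the post) is assumed before the children join the population.

```python
import random

def one_point_crossover(p1, p2, point=None):
    """Split both parents at the crossover point and swap tails."""
    if point is None:
        point = random.randrange(1, len(p1) - 1)
    c1 = p1[:point] + p2[point:]  # P1H1 + P2H2
    c2 = p2[:point] + p1[point:]  # P2H1 + P1H2
    return c1, c2
```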

Genetic Algorithm Definitions for TSP

A genetic algorithm is a type of evolutionary algorithm, and the TSP must therefore be framed to satisfy all the constraints necessary to execute a genetic algorithm. An organism, in the sense of the TSP, can be defined as a viable path that visits every node in the graph. Each path must start with a node, visit all the nodes present in the graph, and then return to the node it started with. An example of a viable path for an input graph of 10 vertices is shown below, with each letter representing a node in the input graph:

{A, C, J, D, G, H, E, B, F, I, A}
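That definition can be checked mechanically; a small sketch, where `nodes` is assumed to be the vertex set of the input graph:

```python
def is_viable_path(path, nodes):
    """A viable organism: a closed tour that visits every node exactly once."""
    return path[0] == path[-1] and sorted(path[:-1]) == sorted(nodes)

print(is_viable_path(list("ACJDGHEBFIA"), list("ABCDEFGHIJ")))  # True
```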

The population in the TSP can be defined as a set of unique paths. Fitness can be defined as the weight, or distance, of the path; thus, a lower weight results in higher fitness and vice versa. A sample population of two organisms is shown below. In front of each organism is its weight, or, for the purposes of this exercise, its fitness:

set{ 143 : {A, C, J, D, G, H, E, B, F, I, A}, 210 : {A, B, J, D, C, E, I, F, H, G, A} }
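Computing that fitness value is a single pass over the tour’s edges; a minimal sketch, assuming `dist` is a lookup of edge weights keyed by node pairs (hypothetical data):

```python
def path_weight(path, dist):
    """Sum the weights of consecutive edges in a closed tour."""
    return sum(dist[a, b] for a, b in zip(path, path[1:]))

# Toy three-node example: A -> B -> C -> A
dist = {('A', 'B'): 5, ('B', 'C'): 7, ('C', 'A'): 3}
print(path_weight(['A', 'B', 'C', 'A'], dist))  # 15
```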

Genetic Algorithms: Intro

In this exercise, we attempt to use genetic algorithms to find an optimal, though not perfect, solution to the traveling salesman problem. A genetic algorithm emulates nature in its optimization process. Nature uses several mechanisms that have led to the emergence of new species ever better adapted to their environments; the laws governing species evolution were established by Charles Darwin’s research in the nineteenth century. Genetic algorithms are powerful optimization methods that apply these evolutionary rules to arrive at a pseudo-optimal answer. Modeled on the evolution of species, they make use of genetic properties such as selection, crossover, and mutation.
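Putting the three operators together, the overall loop looks roughly like this (a skeleton only; `select`, `crossover_all`, and `mutate` stand in for the operators discussed in the posts above):

```python
def evolve(population, fitness, select, crossover_all, mutate, generations=5000):
    """Skeleton of a genetic algorithm: selection, crossover, mutation."""
    for _ in range(generations):
        survivors = select(population, fitness)        # kill off the unfit
        children = crossover_all(survivors)            # recombine survivors
        population = survivors + [mutate(c) for c in children]
    return min(population, key=fitness)                # lowest weight = fittest
```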

Introduction to the Traveling Salesman Problem

The Problem
The traveling salesman problem (TSP) is a typical example of a very hard combinatorial optimization problem. The problem is to find the shortest tour that passes through each vertex in a given graph exactly once. The TSP is classified as NP-hard (its decision version is NP-complete). There are intuitive methods for finding solutions, but these methods have exponential complexity: they take too much computing time or require too much memory. Mathematically, the TSP can be expressed as:
min f(T), where T = (T[1], T[2], T[3], …, T[n]) ranges over the orderings of the n vertices and f(T) is the total weight of tour T.
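For contrast, the exact approach enumerates every ordering, which is what makes the problem intractable at scale; a brute-force sketch (with `dist` a hypothetical edge-weight lookup as in the fitness example above):

```python
from itertools import permutations

def brute_force_tsp(nodes, dist):
    """Exact TSP by exhaustive search: O(n!) tours, hence the need
    for approximate methods such as genetic algorithms."""
    start, rest = nodes[0], nodes[1:]
    tours = ([start, *p, start] for p in permutations(rest))
    return min(tours, key=lambda t: sum(dist[a, b] for a, b in zip(t, t[1:])))
```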