Communication in modern computing – Mario Alemi, Lead Scientist GetJenny


The keyword in “Artificial Neural Network” (ANN) is network. ANNs were introduced in the 1940s, with the idea that simple mathematical functions, later named perceptrons, could, when interconnected, do what computers are only now starting to do: generalize concepts. If you think about the history of computing, it was not the right moment for “distributed computing”. The von Neumann architecture (the one we have been using until now: CPU (Central Processing Unit), RAM, etc.) is a serial model of computing, which means that networks of small computing functions, the perceptrons, are hard to implement: given a number of inputs to a set of perceptrons, the machine evaluates one perceptron, then another, then computes how each of these perceptrons influences the others, evaluates those, computes how they influence the rest, and so on.

But the pioneers of computing networks had a point (of course). Think of an ant colony. A single ant is not smart, but thousands of them are, provided they can communicate. Communication (what networks need to be successful, what networks actually are) is also why networks of perceptrons were hardly the right solution at the dawn of computer science. At the time, having small computing units communicate with each other was out of the question. Much better to create a single “decently smart ant”, the CPU, and leave communication to future development. Communication remains, nonetheless, the key: if you want to create something as good as our brain, you need billions of computing elements communicating with each other.

That is now possible thanks to GPUs (Graphics Processing Units). GPUs, unlike CPUs, take grids of numbers and perform computations on all of them at the same time. Imagine having two grids of 1,000 x 1,000 elements and wanting to multiply each element of one grid by the corresponding element of the second (the so-called Hadamard product). That makes one million operations. Well, a GPU can grab all two million elements and perform all the multiplications at the same time.
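
To make the idea concrete, here is the Hadamard product sketched in plain Python (a toy sketch to show the arithmetic; in practice you would use a GPU-backed library):

```python
# Element-wise (Hadamard) product of two grids, sketched in plain Python.
# A CPU works through these multiplications one by one; a GPU performs
# them all in parallel.

def hadamard(a, b):
    """Multiply two equally-sized grids element by element."""
    return [[x * y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(hadamard(a, b))  # [[5, 12], [21, 32]]
```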

Why has this been a breakthrough for ANNs? Because the mathematical representation of a network is a grid, which scientists call a matrix. The representation is quite simple: the n-th number of the m-th row indicates the weight of the connection between the n-th element and the m-th element in the network. A “0” means no connection; a high number means the receiving element “feels” strongly what the sender sends.

This is how, simplifying a lot, neural networks work. The inputs are sent to a set of perceptrons in the network, which are now more complex than the ones introduced in the 1950s and are aptly called neurons. These neurons apply a weight to each of the inputs, effectively creating a connection with all the input channels. The output is “massaged” and sent to a new set of neurons. These do the same (weigh, sum, massage) and send the results to a new set, and so on, until we reach a small set of neurons, each of which represents a certain category of inputs. An example: take the pixels of a picture as input; for every picture of a tree, the same neuron in the last set gives a strong signal, while all the others give weak ones.
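
The weigh-sum-massage step can be sketched in a few lines of Python (a much-simplified toy, with a sigmoid as the “massaging” function and made-up weights):

```python
import math

# A much-simplified sketch of one layer of neurons: each neuron weighs
# every input, sums the results, and "massages" the sum with an
# activation function (here, the sigmoid).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Each row of `weights` holds one neuron's connections to the inputs."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

inputs = [0.5, -1.0, 2.0]        # e.g. three pixel values
weights = [[0.1, 0.4, -0.2],     # neuron 1's connection weights
           [0.0, 0.9, 0.3]]      # neuron 2's connection weights
print(layer(inputs, weights))
```

Stacking several such layers, with the output of one feeding the next, gives the chain of neuron sets described above.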

The whole point is finding the values of the weights on the connections so that the network always categorizes correctly. Which, in mathematical lingo, means “find the right topology of the network”, or “the way the elements are connected”.

In the next post we’ll see how nature and computers found that certain topologies are favoured: regardless of whether the network is made of mathematical functions (the perceptrons/neurons) or of biological neurons, the way the elements connect is similar…

Historical Readings

(1945) John von Neumann, “The First Draft Report on the EDVAC”

(1947) Walter Pitts and Warren S. McCulloch, “How we know universals: the perception of auditory and visual forms,” Bulletin of Mathematical Biophysics 9:127-147.

(1958) F. Rosenblatt, “The perceptron: a probabilistic model for information storage and organization in the brain,” Psychological Review, 65:386-408.


Chatbot for Slush: 68 percent automation on day one


GetJenny is setting up Slush’s official chatbot. The first day after going live was promising: based on early data, the chatbot was able to handle 68% of incoming messages on its own.

Slush began with a few hundred attendees back in 2008, but its 2017 Helsinki edition expects more than 17,000 people. Such a major event isn’t easy to run, especially not on the support side, which is where automation comes in.

In 2016, Slush received roughly a thousand requests over two days through live chat. The most commonly asked questions included “Where can I get my badge?” and “How do I get to the venue?”. Simple enough, but once such questions occur in the hundreds, a team is quickly pushed to its limits.

Automate recurring questions, optimize, and keep staffing efforts low

These requests are generally similar, which means they can be handled automatically, namely with a chatbot: a program that can have a conversation with anyone requesting help. Either the program handles the whole conversation itself, or it hands the person it is talking to over to a human service agent. The latter happens when the question is recognized as too complex for the bot to handle.

GetJenny’s approach makes it possible to integrate its conversational engine, called Jenny, with a variety of different systems. In the case of Slush, this is Ninchat, a nimble Finnish chat platform.

Once set up, Jenny is taught a large number of so-called states: particular situations the chatbot acts on depending on the input it receives, for example providing information on how to get from Helsinki airport to the venue, or retrieving up-to-date agenda information from Slush’s database through its API.


Teaching Jenny: 68 percent automation on day one

The starting point for teaching the chatbot was an analysis of the previous year’s live chat data. Additional testing and training with hundreds of volunteers helped the team at GetJenny perfect their approach for this specific use of the bot, and made it possible to create responses that are both immediately useful and close enough to human interaction.

On the first day of operation, the Slush bot achieved 68 percent automation, meaning it was able to independently respond to more than two thirds of all the requests for information it received.

The chatbot doesn’t depend on opening hours or hired staff to man workstations. It can handle the bulk of conversations like last year’s, leaving only the most complicated matters for agents to settle. At night, while the support agents are enjoying their time off, the chatbot keeps answering questions and logs any question it has not been able to answer, making it a breeze to teach it about more topics the next day.

Ready for action and 17,500 attendees

Setting up the technical base for Jenny’s Slush incarnation took the team some two days, which included integrating the solution with Ninchat’s live chat platform. Training the conversational engine and feeding it all the states and queues it will be expected to act on naturally also requires a few days, but it is a straightforward process that can easily be adapted to a variety of requirements.

The team at GetJenny expect their chatbot for Slush, also called Jenny, to do well. A day before Slush kicks off, they’re busy making sure it has a broad enough answer base. Once that is done, Jenny is ready to face 17,500 attendees, 2,300 start-ups, over a thousand investors, and scores of journalists!

You can read more about the partnership in the Slush blog post.

How a multinational insurance company automates over 60% of their customer service queries


How a multinational insurance company automates over 60% of their customer service queries with the GetJenny chatbot: A case study.

The successful cooperation between the leading Nordic indemnity insurance company IF and GetJenny began at the end of 2016, when IF was looking for an agile and ambitious partner for their chatbot project.

With 3.7 million clients in Finland, Sweden, Norway, and Denmark, IF realized that they needed an automated virtual assistant to deal with the increasing number of repetitive customer service enquiries. Freeing customer service agents from mundane queries would let IF use their skills better: serving customers with more complex enquiries, ensuring proactive sales lead development, and increasing sales to new customers.

Through research into their existing customer service infrastructure and customer satisfaction, IF concluded that their customers and potential customers actively look for information, and above all want answers 24/7.

IF already provided a live chat with agents for the existing customers through their online accounts. Now it was time to scale up the customer service experience and provide a chat service on the open web pages without hiring additional staff.


Chatbot Emma was launched in collaboration with GetJenny and Giosg in March 2017. The initial plan was to test its operational abilities for two hours, with the target of handling 10-20% of customer enquiries. Those hours proved Emma an invaluable customer service team member, and it has been live ever since.

When Emma was first launched, it was taught to answer questions on 50 frequently asked topics. Within six months, Emma was handling over 60% of all IF’s customer service enquiries, spanning over 250 topics.

It’s all about continuous learning

Emma is designed to be an active member of the IF customer service team. Like everyone else in the team, Emma is constantly trained to better serve the expanding customer base. Emma has a simple-to-use interface that enables the IF chat team to teach Emma new topics, analyze performance, and adapt existing query responses to better fit customer needs on a daily basis.

The customer feedback has been encouraging. As Emma keeps learning, customer satisfaction keeps improving. Above all, customers really appreciate that chatting with Emma is easy, fast, and queue-free.

The IF customer service team is now able to focus on providing a superior customer service experience to the existing clients with complex insurance needs, as well as to actively engage with new potential customers leading to increased sales.
“Using the GetJenny chatbot solution, which we named Emma, we have been able to automate over 60% of repetitive customer service questions. Using the GetJenny Web User Interface our agents are able to easily update and maintain the chatbot as needed.

Working with GetJenny is flexible and easy. I am really inspired by their passion and skills. We are developing the product even further together and it is a learning curve that is really paying off. I think this kind of commitment and collaboration wouldn’t have been possible with a big IT-company. Our whole team really enjoys working with GetJenny!” says Asko Mustonen, Development Manager, IF.

The future

Emma’s success as an extremely effective customer service team member encouraged IF to get her a brother, to better serve existing and potential B2B clients. Together with IF, GetJenny created Alvar, who is now live with the B2B customer service team and plays an increasingly big role in the overall B2B customer service experience.

You can also get the full version of the case study here.

5 tips for how to succeed in a chatbot project


The successful cooperation between insurance company IF and GetJenny started at the end of 2016, when IF was looking for an agile and ambitious partner for their chatbot project. IF needed the chatbot to meet the wishes of consumers, not only those who are already IF customers but potential ones as well.

Asko Mustonen, Development Manager at IF, was in charge of the project. Together with GetJenny and IF’s customer service and marketing teams, he created Emma: a chatbot that is live 24/7, answers questions on over 250 topics, and develops weekly to help customers even further with their problems. We created this chatbot through our partner Giosg’s platform.

Plan, measure and continuously develop

Many companies are currently exploring whether they should invest in a chatbot. What are the pros and cons? Can a chatbot really be a beneficial part of a customer service team? Does every website have to have one just because of the hype?

Asko listed five tips for colleagues who are wondering whether a chatbot would be the perfect answer for taking their customer service to the next level.

  1. Be brave and think outside the box

Gather a group from different units, sit down together, and analyze what the role of the chatbot would be and how you see it affecting your business. What are the goals for the project, and how are you going to measure them? Be open to new ideas and start testing as soon as you can.

  2. Human vs. Chatbot

Be honest. Are you trying to solve a problem that a chatbot couldn’t handle but a human could? Just because chatbots are cool doesn’t mean you should necessarily have one. Sometimes an update to the website would be the best solution.

  3. Teaching a chatbot is an everyday task

A chatbot is a team member like everyone else: it needs attention and education like we all do. Luckily, working with a chatbot is sometimes easier than working with us humans. You don’t have to be technical to do it, and it learns without questioning your opinions.

  4. Count the pros and cons in euros

Do the math. A chatbot project needs resources, but if the project is planned carefully, there will come a time when the chatbot saves the customer service team time by handling the most basic questions and allowing the team to focus on more complex cases. A chatbot allows 24/7 customer service without adding headcount.

  5. Find the right partners

To achieve the best results you need to partner with skilled people with great passion. Collaboration with partners who really want to understand your business and commit to the project is key to a successful chatbot project.

Interested? Would now be the perfect time for us to meet? Book a demo here.




Some things to consider before committing to an enterprise bot


Bot development costs range from under a thousand to above 100,000 USD, according to the experience of Alexander Gamanyuk over at Botmakers.

The price is obviously determined by how much work it takes to implement a bot (and how much developers can re-use from earlier bots), but it is clear that the price goes up for enterprise in-house use cases as opposed to consumer-facing bots.

Enterprise deployment becomes very expensive because of the nature of virtual agents: they have to learn from an existing corpus relevant to their purpose and then handle information which, in this case, is very sensitive. Everything either has to stay in-house, or a company has to clear some serious red tape and security screenings to be deemed trustworthy.

This is also the reason why the bot development industry is adopting a licensing model rather than the SaaS approach.

Companies looking at in-house bot projects should therefore do their research not only on the benchmarks and capabilities of the technology, but also on what’s included in the license. If you have to change something, does that come with a huge price tag, or can you modify it yourself?

This can make or break a virtual agent implementation, as changes from other parts of the company can affect it. The less flexible it is, the more you will have to pay down the line.

Alternatively, there’s always the option of doing the work in-house if you have the development power to do so. We’ve talked with many companies who just tried to implement the trendiest new technology and ended up missing the mark.

Whatever the Market Leader or Company With Most PR is using might NOT be a suitable solution for your needs, the same way you wouldn’t try to tighten the bolts on your door with a jackhammer.

Luckily, there are now great open-source tools that you can set up easily. Alternatively, once you have an idea for your use case, simply build an MVP with one of the existing bot frameworks and test it with your would-be users.

It’s a small effort compared to wasting five or six figures on building a virtual agent solution that your colleagues or customers would hate.

Building a simple FAQ bot with Starchat


If you’re a small company just dipping your toes into providing online support, you may have noticed that despite your best efforts at providing your customers with information, they come to your chat asking quite common questions.

Today we’re going to show you how to keep your support staff from ripping their hair out, by building a simple bot with StarChat that can serve as a first line of support for the most common questions.

(An example of such a bot can be seen here on our website.)

After you’ve set up StarChat with Docker, here’s a brief explanation of how it works and what you can do with it:

NLP processing

NLP is, of course, the core of any bot. StarChat has two primary ways of triggering states: through queries and analyzers.


If the analyzer field is empty, StarChat will query Elasticsearch for the state containing the most similar sentence in the field queries. We have carefully configured Elasticsearch to provide good answers (e.g. boosting results where the same words appear), and the results are promising. But you are encouraged to use the analyzer field, documented below.


Through the analyzers, you can easily leverage the various NLP algorithms included in StarChat, together with the NLP capabilities of Elasticsearch, and combine the results of those algorithms. The best way to get started is to look at the simple example included in the CSV provided in the doc/ directory for the state forgot_password:


The expressions and and or are called operators, while keyword is an atom.

Expressions: Atoms

Presently, the keyword(“reset”) in the example provides a very simple score: the number of occurrences of the word reset in the user’s query divided by the total number of words. Evaluated against the sentence “Want to reset my password”, keyword(“reset”) currently returns 0.2.
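
In Python terms (a sketch of the arithmetic only; the actual implementation is Scala), the score works like this:

```python
# Sketch of the keyword atom's score: occurrences of the word in the
# user's query divided by the total number of words in the query.

def keyword_score(word, query):
    tokens = query.lower().split()
    if not tokens:
        return 0.0
    return tokens.count(word.lower()) / len(tokens)

print(keyword_score("reset", "Want to reset my password"))  # 0.2
```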

These are currently the expressions you can use to evaluate a query (see DefaultFactoryAtomic and StarchatFactoryAtomic):

keyword(“word”): as explained above; normalized
regex: evaluates a regular expression; not normalized
search(state_name): takes a state name as argument, queries Elasticsearch, and returns the score of the most similar query in the field queries of that state. In other words, it does what StarChat would do without any analyzer, but with a normalized score, e.g. search(“lost_password_state”)
synonym(“word”): gives a normalized cosine similarity between the argument and the closest word in the user’s sentence. We use word2vec; to get an idea of the distance between two words you can use the word2vec demo by Turku University
similar(“a whole sentence”): gives a normalized cosine similarity between the argument and the user’s sentence (word2vec)
similarState(state_name): same as above, but using the sentences in the field “queries” of the state given as argument.
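
The word2vec-based atoms boil down to a cosine similarity between embedding vectors. A toy sketch with made-up three-dimensional vectors (real word2vec embeddings have hundreds of dimensions):

```python
import math

# Cosine similarity between two vectors: 1.0 means same direction,
# 0.0 means orthogonal. The synonym/similar atoms apply this kind of
# measure to word2vec embeddings; the vectors below are made up purely
# for illustration.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

password = [0.9, 0.1, 0.3]     # hypothetical embedding of "password"
passwords = [0.85, 0.15, 0.3]  # hypothetical embedding of "passwords"
print(cosine(password, passwords))
```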

Expressions: Operators

Operators evaluate the output of one or more expressions and return a value. Currently, the following operators are implemented (see the source code):

boolean or: calls matches on all the expressions it contains and returns true or false. It can be called using bor
boolean and: as above; called with band
boolean not: as above; bnot
conjunction: if the evaluations of the contained expressions are normalized and can be seen as probabilities of being true, this is the probability that they are all true (P(A)*P(B))
disjunction: as above, the probability that at least one is true (1-(1-P(A))*(1-P(B)))
max: takes the maximum score returned by the expression arguments
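
The probabilistic operators are plain arithmetic; a Python sketch:

```python
# Combine normalized atom scores as if they were independent
# probabilities of the expressions being "true".

def conjunction(scores):
    """P(all true) = P(A) * P(B) * ..."""
    result = 1.0
    for s in scores:
        result *= s
    return result

def disjunction(scores):
    """P(at least one true) = 1 - (1-P(A)) * (1-P(B)) * ..."""
    result = 1.0
    for s in scores:
        result *= (1.0 - s)
    return 1.0 - result

print(conjunction([0.5, 0.8]), disjunction([0.5, 0.8]))
```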

Technical corner: expressions

Expressions, like keyword in the example above, are called atoms and have the following methods/members:

def evaluate(query: String): Double: produces a score. It may or may not be normalized to 1 (set val isEvaluateNormalized: Boolean accordingly)
val match_threshold: the threshold above which the expression is considered true when matches is called. NB: the default value is 0.0, which is normally not ideal.
def matches(query: String): Boolean: calls evaluate and checks against the threshold
val rx: the name of the atom, as it should be used in the analyzer field.
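
A rough Python analogue of that interface (the real trait is Scala; the names below only loosely mirror the Scala ones):

```python
# A rough Python analogue of a StarChat atom. The real implementation is
# Scala; this only mirrors the evaluate/matches/threshold contract.

class KeywordAtom:
    rx = "keyword"                  # name used in the analyzer field
    is_evaluate_normalized = True   # the score stays within [0, 1]
    match_threshold = 0.0           # default threshold; normally too low

    def __init__(self, word):
        self.word = word.lower()

    def evaluate(self, query):
        """Produce a normalized score for the query."""
        tokens = query.lower().split()
        if not tokens:
            return 0.0
        return tokens.count(self.word) / len(tokens)

    def matches(self, query):
        """Call evaluate and check the score against the threshold."""
        return self.evaluate(query) > self.match_threshold

atom = KeywordAtom("reset")
print(atom.evaluate("Want to reset my password"))  # 0.2
print(atom.matches("hello there"))                 # False
```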

Configuration of the answer recommender (Knowledge Base)

Through the /knowledgebase endpoint you can add, edit, and remove the question-answer pairs used by StarChat to recommend possible answers when a question arrives.

Documents containing Q&A pairs must be structured like this:

 {
  "id": "0", // id of the pair
  "conversation": "id:1000", // id of the conversation; useful for external services
  "index_in_conversation": 1, // where the pair appears inside the conversation
  "question": "thank you", // the question to be matched
  "answer": "you are welcome!", // the answer to be recommended
  "question_scored_terms": [ // a list of keywords and scores; use your own keyword extractor or our Manaus (see later)
  ],
  "verified": true, // a variable used in some call centers
  "topics": "t1 t2", // eventual topics to be associated
  "doctype": "normal",
  "state": "",
  "status": 0
 }

See POST /knowledgebase for an example with curl. The other verbs (GET, DELETE, PUT) are used to retrieve, delete, or update a document.
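
For illustration, here is how you might build such a POST from Python's standard library. The host and port are assumptions for a local Docker setup, so check your own StarChat configuration (and any required authentication) before sending:

```python
import json
import urllib.request

# Hypothetical local endpoint -- adjust host/port to your own deployment.
URL = "http://localhost:8888/knowledgebase"

doc = {
    "id": "0",
    "conversation": "id:1000",
    "index_in_conversation": 1,
    "question": "thank you",
    "answer": "you are welcome!",
    "verified": True,
    "topics": "t1 t2",
    "doctype": "normal",
    "state": "",
    "status": 0,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment against a running StarChat instance
print(req.get_method(), req.full_url)
```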

Testing the knowledge base

Just run the example in POST /knowledgebase_search.

And voilà! By configuring your bot with your existing knowledge base and beefing it up with chat logs of your most common conversations, you should have a functional first line of help.

All you have to do is to connect it to the chat system of your choice and configure when you want the bot to handle the conversation.

The advantage of “Agent side bots”


There’s a less frequently discussed aspect of what advanced natural language processing technology allows us to do, one that will have a big impact on our daily lives.

This is what we’re calling “Agent side bots”: technology deployed between different pieces of software that helps humans with their daily work.

The need for these sorts of products comes from a few different things. Namely:

1) Fully automated chatbots are… not really delivering on the initial promise. Low response rates, creeped-out users, and bad experiences have scarred many customers, and in turn are scaring companies away from deploying them.
2) There’s simply no way to beat human decision making when it comes to human to human interactions.
3) Software that helps the user, especially in business environments, is always a winning strategy.

If you think of the evolution of modern business interfaces, it’s all about giving us more, and more flexible, control over what we want to do and how we can do it. Search engines changed the world for a good reason, to the extent that much of our daily lives consists of searching for things on the internet (and our work: searching for things on the intranet).

But let’s break down a typical process in a business environment:

Alice asks Bob a question (through chat, e-mail, or the phone),
Bob looks up the information for Alice in a system only he has access to,
or Bob tells Alice the answer based on his own knowledge of the subject.

We’ve seen a lot of attempts at replacing Bob in this transaction, either by turning Bob into a Wikipedia or by substituting him with a fully automatic agent.

But typically, Alice’s question might be unique only from Alice’s perspective, while Bob has to handle the same question 10-100 times a day!

And because we have logs of these interactions, and advanced enough technology, we can build Agent Side Bots to help Bob. Add a layer to the software that they both use, and the agent side bot can interpret Alice’s question and recommend the answer or the action to Bob, based on previous cases.

The task of interpreting, searching, and replying becomes as easy as clicking a button: accepting the suggestion made by the Agent Side Bot. You can also design these systems so that Bob can customize the answer, while still significantly cutting down the time it takes to handle the task.

Moreover, the personal assistant gets better and better with every conversation: making better suggestions over a greater variety of topics.

One early and useful example of such technology was Google’s Smart Reply; thankfully, we can now build these kinds of solutions ourselves, tailored to our own products.

Businesses globally help their workers with technology to varying degrees, because state-of-the-art technology has historically been expensive to implement and difficult to customize. So in some cases, the Bobs and Alices of the world are using Notepad, approved templates, or literal sticky notes to help them with these sorts of routine communication tasks.

We hope that with the proliferation of agent side bots, their lives will be much easier in the future.

Introducing starchat, an open source, scalable conversational engine for B2B applications


We’ve long been saying that the hype around chatbots might die down one day. People are discovering that not everything instantly becomes better when you wrap it in a conversational UI; some things are better handled with buttons.

However, apart from fun experiments, conversational engines are finding their roles in our increasingly automated society.

It’s still a long road ahead, though. After all, if even humans sometimes have trouble understanding language, it would be very optimistic to demand that capability from our machines. That being said, we believe in openness: if we want to reach a future where machines can reasonably understand human commands and learn processes the same way we teach human workers, then we need to build that future together.

The problem is, right now if you want to develop your own chatbot, you have to rely on a closed-source NLP engine provided by Google, IBM, or Facebook. Free tiers aside, this puts conversational agent companies between a rock and a hard place, as terms can change at any time.

That’s why the core of our technology, starchat, is open source. We welcome all developers interested in tinkering with, experimenting on, or improving the conversational engine, and in finding use cases for it that we haven’t even dreamed of.

At present, we use it to power customer support roles: by training the system on existing support cases, it can handle a solid portion of customer chats on its own, depending on data quality. The proof is in the pudding, and if our clients are confident enough to trust the technology, because it delivers results, we think you should be too.

It’s also very easy to train bots with – and to demonstrate, we have built a FAQ conversational bot about our own business, that you can play around with here.

We’ll be showing off more of the technology in the following days – how we built the bot for example, and how you can do the same with it.

In the meantime, you can get started with starchat here, hosted on GitHub.

LivePerson partners with Jenny for the Live Engage Bot platform


We are proud to announce that we are part of the select few AI companies helping LivePerson, the leading provider of cloud, mobile, and online business messaging solutions, run the world’s first enterprise-level bot management platform.

LiveEngage for bots allows large brands to deploy, manage, and measure bots they build on their own on LivePerson’s open framework — as well as bots from third parties — to provide customer care and sales assistance to consumers.

Instead of “set and forget” bots that run unsupervised, it adds a layer of analytics and intelligence to AI, which helps businesses get a better understanding of the effectiveness of bots in their customer care operations.

Through LiveEngage’s open framework, businesses can build their own bots, or bring in bots from a third-party developer, to be managed on the LiveEngage for Bots platform. To bring this platform to life, LivePerson has partnered with a number of bot and AI providers, including IBM Watson, which is already running at large banks and telcos. Other LiveEngage for Bots partners include a number of start-ups doing innovative work in the bots space, including NextIT, Robotify, Bot Central, GetJenny, and Chatfuel, whose bot, running on the LiveEngage for Bots platform, was recently showcased at Facebook’s F8 conference.

We are looking forward to being a part of this initiative, helping businesses take their customer service to the next level and giving consumers the help they deserve. Our professional tools and open-source technology will be deployed at the busiest intersections of communication, helping businesses scale their operations amid increasing demand for live engagement.

On hype driven machine learning


(Original image from the amazing Shivon Zilis)

This is the competitive landscape for machine learning as of now. Countless posts have been written lately on who and what you need to follow in order to navigate this landscape, and rightly so. It’s already enormous, and as with any industry that’s just awakening, the boundaries are not yet clear and everything is really up in the air.

In short, if you wanted to build a chat application for your business or for fun and looked at this chart, you would be confused about where to start.

And no wonder: the latest buzzwords are AI and machine learning, and startups all across the globe are getting in on the game, plenty of them just tacking the words on in hopes of quick funding. Not without merit, as we’ve seen investors swarming in and some big exits have already happened. It’s a gold rush.

And it’s hectic.

Now, coming down to the fact of the matter: for a working conversational agent (or chat interface, or bot, or human-like automation) you need three things:

– A good language model
– Data to train the agent on
– Connector functions, e.g. which systems (chat and otherwise) the agent will interact with.

Let’s focus on the first two, as the third one is more or less covered by the market adequately at this point.

The talk of the town is neural networks, and rightly so. The technology, which has been around since the ’70s, is now very accessible. Computing power is now cheap enough to build large neural nets, and companies are filling this need very well; Nvidia’s pivot comes to mind as one of the most successful.

However, our previous experience shows that a “purely neural network” approach, which analyses only the usually scarce number of sentences produced by even big corporations (counting far below a billion words), leads to extremely poor results.

Neural networks require a lot of representative data, something most companies don’t have lying around. You can use generative neural nets to create simple question-answering bots, but the more specific the task becomes, the more this approach falls apart. Sure, anybody can download a big dataset and train a TensorFlow model following a recipe to answer questions, but such a thing is useless if you want your bot to take specific actions: making a payment, placing an order, looking up customer info, etc.

To draw a real-life comparison, it is like asking a child to learn Finnish and how to answer customer service questions after having listened to just a few thousand sentences from a customer service log, sentences which are most of the time relatively similar. Computers are good at analyzing huge amounts of data (in the exabyte range), and the logs of past customer service conversations are usually a factor of a million below that.

Like all hard problems, it comes down not to the tools you use, but to how you approach the problem. We believe it pays off not to do hype-driven development, but to go with what works. In a multilingual environment with a limited amount of data available, you need multiple different tools; don’t just follow the trend, stick with what works.

We are confident that our conclusions are correct, based on years of experience at the forefront of machine learning research:

Mario Alemi was an associate scientist at CERN, developing algorithms for LHCb for five years, professor of Data Analysis for Physics in Milan and at Uninsubria, Italy, and professor of Mathematics at École Supérieure de la Chambre du Commerce et de l’Industrie de Paris. At the moment he is scientific coordinator for the Master in Data Analysis at the Italian TAG Innovation School.
Mario has more than 50 peer-reviewed scientific publications, most of them on statistical techniques for data modelling and analysis, with more than 4,000 citations.
Mario has also been responsible for AI development at Your.MD, praised by various publications, including the Economist, as one of the most advanced AI-based symptom checkers available today.

Angelo Leto has many years of experience in software engineering, implementing machine learning algorithms for NLP at CELI, in medical imaging, and in other data-driven contexts. He has also worked at the Abdus Salam International Centre for Theoretical Physics and the International School for Advanced Studies of Trieste, implementing data processing infrastructure and optimizing and porting scientific applications to distributed environments. Recently he gave a lecture on parallel computation with Apache Spark at the Master in High-Performance Computing.

GetJenny was selected to be part of Nvidia’s Inception program. The program gives us access to a great network of cutting-edge expertise and exclusive learning resources, as well as remote access to state-of-the-art technology.

If you would like to learn more, sign up to our beta waiting list, and we’ll be in contact with you shortly.
