5 tips for how to succeed in a chatbot project

The successful cooperation between the insurance company IF and GetJenny started at the end of 2016, when IF was looking for an agile and ambitious partner for their chatbot project. IF needed the chatbot to fulfill consumers' wishes – not only those of existing IF customers but those of potential customers as well.

Asko Mustonen, Development Manager at IF, was in charge of the project. Together with GetJenny and IF's customer service and marketing teams, they created Emma – a chatbot that is live 24/7, answers questions on over 250 topics and is developed further every week to help customers with their problems.

Plan, measure and continuously develop

Many companies are currently exploring whether they should invest in a chatbot. What are the pros and cons? Can a chatbot really be a beneficial part of a customer service team? Does every website have to have one just because of the hype?

Asko listed 5 tips for colleagues who are wondering whether a chatbot would be the right way to take their customer service to the next level.

  1. Be brave and think outside the box

Gather a group from different units, sit down together and analyze what the role of the chatbot would be and how you see it affecting your business. What are the goals for the project, and how are you going to measure them? Be open to new ideas and start testing as soon as you can.

  2. Human vs. Chatbot

Be honest. Are you solving a problem that a chatbot can actually handle, or one that only a human could? Just because chatbots are cool doesn't mean you necessarily need one. Sometimes an update to the website is the best solution.

  3. Teaching a chatbot is an everyday task

A chatbot is a team member like everyone else: it needs attention and education like we all do. Luckily, working with a chatbot is sometimes easier than working with humans. You don't have to be technical to do it, and the bot learns without questioning your opinions.

  4. Count the pros and cons in euros

Do the math. A chatbot project needs resources, but if the project is planned carefully, the chatbot will eventually save time for the customer service team by handling the most basic questions and allowing the team to focus on the more complex cases. A chatbot enables 24/7 customer service without adding headcount.
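
To make "do the math" concrete, here is a hypothetical back-of-the-envelope calculation. Every figure below is an illustrative assumption, not data from the IF/GetJenny project:

```python
# Hypothetical monthly figures -- all assumptions, adjust to your own case.
basic_questions_per_month = 3000   # chats the bot could fully handle
minutes_per_question = 4           # average agent handling time
agent_cost_per_hour_eur = 30       # fully loaded cost of an agent hour

# Hours of agent time the bot saves, and what that is worth.
hours_saved = basic_questions_per_month * minutes_per_question / 60
monthly_saving_eur = hours_saved * agent_cost_per_hour_eur

monthly_bot_cost_eur = 2000        # assumed license + maintenance cost

print(monthly_saving_eur - monthly_bot_cost_eur)  # → 4000.0
```

Even with conservative assumptions like these, the break-even point is easy to estimate before committing to a project.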

  5. Find the right partners

To achieve the best results, you need to partner with skilled people who have great passion. Collaborating with partners who really want to understand your business and who commit to the project is the key to a successful chatbot project.

Interested? Would this be the perfect time for us to meet? Book a demo here.

What we did and how we did it.

Some things to consider before committing to an enterprise bot

Bot development costs range from under a thousand to above 100 000 USD – according to the experience of Alexander Gamanyuk over at Botmakers.

The price is obviously determined by how much work it takes to implement a bot (and how much existing work bot developers can reuse for newer bots), but it's clear that the price goes up for enterprise in-house use cases compared to consumer-facing bots.

Enterprise deployment becomes very expensive because of the nature of virtual agents: they have to learn from an existing corpus relevant to their purpose and then handle information – which in this case is very sensitive. So either everything has to stay in-house, or the vendor has to pass serious red tape and security screenings to be considered trustworthy.

This is also the reason why the bot development industry is adopting a licensing model rather than the SaaS approach.

Companies looking at in-house bot projects should therefore do their research not only on the benchmarks and capabilities of the technology, but also on what's included in the license. If you have to change something, does that come with a huge price tag, or can you modify it yourself?

This can make or break a virtual agent implementation, as changes from other parts of the company can affect it. The less flexible it is, the more you will have to pay down the line.

Alternatively, there's always the option of doing the work in-house if you have the development capacity. We've talked with many companies who simply tried to implement the trendiest new technology and ended up missing the mark.

Whatever the Market Leader or the Company With the Most PR is using might NOT be a suitable solution for your needs – the same way you wouldn't try to tighten the bolts on your door with a jackhammer.

Luckily, there are now great open-source tools that you can set up easily. Alternatively, once you have an idea for your use case, simply build an MVP with one of the existing bot frameworks and test it with your would-be users.

It's a small effort compared to wasting five or six figures on building a virtual agent solution that your colleagues or customers would hate.

Building a simple FAQ bot with Starchat

If you're a small company just dipping its toes into providing online support, you may have noticed that despite your best efforts at providing your customers with information, they come to your chat asking quite common questions.

Today we're going to show you how to keep your support staff from tearing their hair out by building a simple bot with StarChat that can serve as a first line of support for the most common questions.

(An example of such a bot can be seen here on our website.)

After you've set up StarChat with Docker, here's a brief explanation of how it works and what you can do with it:

NLP processing

NLP is, of course, the core of any bot. StarChat has two primary ways of triggering states: through queries and through analyzers.

Queries

If the analyzer field is empty, StarChat will query Elasticsearch for the state containing the most similar sentence in the queries field. We have carefully configured Elasticsearch to provide good answers (e.g. boosting results where the same words appear), and the results are promising. But you are encouraged to use the analyzer field, documented below.

Analyzer

Through the analyzers, you can easily leverage the various NLP algorithms included in StarChat, together with the NLP capabilities of Elasticsearch. You can also combine the results of those algorithms. The best way to get started is to look at the simple example included in the CSV in the doc/ directory, for the state forgot_password:

and(or(keyword("reset"),keyword("forgot")),keyword("password"))

The expressions and and or are called operators, while keyword is an atom.

Expressions: Atoms

Presently, keyword("reset") in the example provides a very simple score: the number of occurrences of the word reset in the user's query divided by the total number of words. Evaluated against the sentence "Want to reset my password", keyword("reset") will currently return 0.2 (one occurrence out of five words).
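
As a rough illustration (in Python rather than StarChat's own Scala, and simplified from the actual implementation), the scoring described above amounts to:

```python
def keyword_score(word: str, query: str) -> float:
    # Occurrences of `word` in the query, divided by the total number of words.
    tokens = query.lower().split()
    if not tokens:
        return 0.0
    return tokens.count(word.lower()) / len(tokens)

print(keyword_score("reset", "Want to reset my password"))  # → 0.2
```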

These are currently the expressions you can use to evaluate the correctness of a query (see DefaultFactoryAtomic and StarchatFactoryAtomic):

keyword(“word”): as explained above; normalized
regex: evaluates a regular expression; not normalized
search(state_name): takes a state name as argument, queries Elasticsearch and returns the score of the most similar query in the queries field of that state. In other words, it does what StarChat would do without any analyzer, only with a normalized score – e.g. search(“lost_password_state”)
synonym(“word”): gives a normalized cosine distance between the argument and the closest word in the user’s sentence. We use word2vec; to get an idea of the distance between two words, you can use this word2vec demo by Turku University
similar(“a whole sentence”): gives a normalized cosine distance between the argument sentence and the user’s sentence (word2vec)
similarState(state_name): same as above, but using the sentences in the queries field of the state given as argument.

Expressions: Operators

Operators evaluate the output of one or more expressions and return a value. Currently, the following operators are implemented (see the source code):

boolean or: calls matches on all the expressions it contains and returns true or false. It can be called using bor
boolean and: as above; it’s called with band
boolean not: as above; bnot
conjunction: if the evaluations of the contained expressions are normalized and can be seen as probabilities of each being true, this is the probability that they are all true (P(A)*P(B))
disjunction: as above, the probability that at least one is true (1-(1-P(A))*(1-P(B)))
max: takes the maximum score returned by the expression arguments
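
The probabilistic operators simply combine normalized scores with the formulas above. A Python sketch of that arithmetic (for illustration; StarChat's actual implementation is in Scala):

```python
from functools import reduce

def conjunction(scores):
    # Probability that all expressions are true: P(A) * P(B) * ...
    return reduce(lambda acc, p: acc * p, scores, 1.0)

def disjunction(scores):
    # Probability that at least one is true: 1 - (1-P(A)) * (1-P(B)) * ...
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), scores, 1.0)

print(conjunction([0.5, 0.5]))  # → 0.25
print(disjunction([0.5, 0.5]))  # → 0.75
```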

Technical corner: expressions

Expressions, like keyword in the example, are called atoms, and have the following methods/members:

def evaluate(query: String): Double: produces a score. It may or may not be normalized to 1 (set val isEvaluateNormalized: Boolean accordingly)
val match_threshold: the threshold above which the expression is considered true when matches is called. NB: the default value is 0.0, which is normally not ideal.
def matches(query: String): Boolean: calls evaluate and checks against the threshold
val rx: the name of the atom, as it should be used in the analyzer field.
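
Put together, the interface above looks roughly like this – a Python transliteration of the Scala members, for illustration only:

```python
class Atom:
    """Sketch of the atom interface described above (simplified)."""
    is_evaluate_normalized = True
    match_threshold = 0.0  # the default; normally you want something higher

    def evaluate(self, query: str) -> float:
        raise NotImplementedError

    def matches(self, query: str) -> bool:
        # calls evaluate and checks against the threshold
        return self.evaluate(query) > self.match_threshold


class Keyword(Atom):
    rx = "keyword"          # the name used in the analyzer field
    match_threshold = 0.1   # raised above the 0.0 default

    def __init__(self, word: str):
        self.word = word.lower()

    def evaluate(self, query: str) -> float:
        # Occurrences of the word divided by the total number of words.
        tokens = query.lower().split()
        return tokens.count(self.word) / len(tokens) if tokens else 0.0
```

With this sketch, Keyword("reset").matches("Want to reset my password") is true, since the score 0.2 exceeds the 0.1 threshold.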

Configuration of the answer recommender (Knowledge Base)

Through the /knowledgebase endpoint you can add, edit and remove pairs of questions and answers used by StarChat to recommend possible answers when a question arrives.

Documents containing Q&A pairs must be structured like this:

{
 "id": "0", // id of the pair
 "conversation": "id:1000", // id of the conversation. This can be useful to external services
 "index_in_conversation": 1, // when the pair appears inside the conversation, as above
 "question": "thank you", // The question to be matched
 "answer": "you are welcome!", // The answer to be recommended
 "question_scored_terms": [ // A list of keywords and scores. You can use your own keyword extractor or our Manaus (see later)
 [
 "thank",
 1.9
 ]
 ],
 "verified": true, // A variable used in some call centers
 "topics": "t1 t2", // Eventual topics to be associated
 "doctype": "normal",
 "state": "",
 "status": 0
 }

See POST /knowledgebase for an example with curl. The other verbs (GET, DELETE, PUT) are used to get, delete or update a document.
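
For instance, indexing the example document from Python might look like this. The host, port and authentication details are assumptions – check your own StarChat instance's configuration:

```python
import json
import urllib.request

# The Q&A document from the example above.
doc = {
    "id": "0",
    "conversation": "id:1000",
    "index_in_conversation": 1,
    "question": "thank you",
    "answer": "you are welcome!",
    "question_scored_terms": [["thank", 1.9]],
    "verified": True,
    "topics": "t1 t2",
    "doctype": "normal",
    "state": "",
    "status": 0,
}

# Build the POST request for the /knowledgebase endpoint.
req = urllib.request.Request(
    "http://localhost:8888/knowledgebase",  # assumed address of your instance
    data=json.dumps(doc).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment against a running StarChat instance
```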

Testing the knowledge base

Just run the example in POST /knowledgebase_search.

And voilà! By configuring your bot with your existing knowledge base and beefing it up with chat logs of your most common conversations, you should have a functional first line of help.

All you have to do is to connect it to the chat system of your choice and configure when you want the bot to handle the conversation.

The advantage of “Agent side bots”

There's a less frequently discussed aspect of what advanced natural language processing technology allows us to do, one that will have a big impact on our daily lives.

This is what we're calling "agent side bots": technology deployed between different pieces of software that helps humans with their daily work.

The need for these sorts of products comes from a few different observations. Namely:

1) Fully automated chatbots are… not really delivering on the initial promise. Low response rates, user creep-outs and bad experiences have scarred many customers, and in turn are scaring companies away from deploying them.
2) There's simply no way to beat human decision making when it comes to human-to-human interactions.
3) Software that helps the user, especially in business environments, is always a winning strategy.

If you think of the evolution of modern business interfaces, it's all about giving us more, and more flexible, control over what we want to do and how we can do it. Search engines changed the world for a good reason, to the extent that much of our daily lives consists of searching for things on the internet (and much of our work: searching for things on the intranet).

But let’s break down a typical process in a business environment:

Alice asks Bob a question (through chat, e-mail or the phone),
Bob looks up the information for Alice in a system only he has access to,
or Bob tells Alice the answer based on his own knowledge of the subject.

We’ve seen a lot of attempts at trying to replace Bob in this transaction, either by turning Bob into a wikipedia or a fully automatic agent.

But typically, Alice’s question might be unique only from Alice’s perspective, while Bob has to handle the same question 10-100 times a day!

And because we have logs of these interactions, and advanced enough technology, we can build agent side bots to help Bob. Add a layer to the software that they both use, and the agent side bot can interpret Alice's question and recommend Bob the answer or the action – based on previous cases.

The task of interpreting, searching and replying becomes as easy as clicking a button – basically accepting the suggestion made by the agent side bot. You can also design these systems so that Bob can customize the answer; either way, you significantly cut down the time it takes to handle the task.

Moreover, the personal assistant gets better and better with every conversation: making better suggestions over a greater variety of topics.

One early and useful example of such technology was Google's smart replies – but thankfully, we can now build these kinds of solutions ourselves, tailored to our own products.

Businesses globally help their workers with technology to varying degrees, because state-of-the-art technology has historically been expensive to implement and difficult to customize. So in some cases, the Bobs and Alices of the world are using Notepad, approved templates or literal sticky notes to help them with these sorts of routine communication tasks.

We hope that with the proliferation of agent side bots, their lives will be much easier in the future.

Introducing starchat, an open source, scalable conversational engine for B2B applications

We've long been saying that the hype around chatbots might die down one day. People are discovering that not everything instantly becomes better when wrapped in a conversational UI – some things are better handled with buttons.

However, apart from fun experiments, conversational engines are finding their roles in our increasingly automated society.

It's still a long road ahead, though. After all, if even humans sometimes have trouble understanding language, it would be very optimistic to demand that capability from our machines. That being said, we believe in openness: if we want to reach a future where machines can reasonably understand human commands, learning processes the same way we teach human workers, then we need to build that future together.

The problem is that right now, if you want to develop your own chatbot, you have to rely on a closed-source NLP engine provided by Google, IBM or Facebook. Free tiers aside, this puts conversational agent companies between a rock and a hard place, as terms can change at any time.

That's why the core of our technology, starchat, is open source. We welcome all developers who are interested in tinkering with, experimenting on or improving the conversational engine – and in finding use cases for it that we haven't even dreamed of.

At present, we use it to power customer support roles. By training the system on existing support cases, it can handle a solid portion of customer chats on its own, depending on data quality. The proof is in the pudding: if our clients are confident enough to trust the technology – because it delivers results – we think you should be too.

It's also very easy to train bots with – and to demonstrate, we have built an FAQ conversational bot about our own business, which you can play around with here.

We'll be showing off more of the technology in the coming days – for example, how we built the bot, and how you can do the same.

In the meantime, you can get started with starchat here, hosted on GitHub.

5 reasons why we recommend the Barclays Techstars Accelerator – and one why we don’t

We just finished our 90-day program at the Barclays Techstars accelerator. It's hard to sum up all of our experiences in a short blog post, and we're still feeling the haze after Demo Day (the culmination of the program). In short, we had a blast, and here are the five key reasons why we would recommend that any young company attend the Techstars accelerator:

1. The value of the network

Whether you're a B2B or a B2C company, the name of the game in business development is networking. Techstars is as much a community of entrepreneurs as it is an accelerator program (if not more), and it shows. Not only are you introduced to people who have tackled the same problems you're facing, they do their very best to introduce you to relevant people who can help your company reach the next level. Be it mentors, advisors, talent, investors, strategic partnerships or your dream customer – they'll bust their asses off to get you in the same room (or at least on a call) with that person.

Introverted type? No problem. Techstars has engineered networking into a scalable-yet-personal method that they teach you to use to your benefit.

2. Insane mentorship

A lot of the challenges on the entrepreneurial road are about learning the right things and executing on them. There's a whole consulting industry out there, and vetting what and whom to follow is an exhausting job in itself. Think of the Techstars community as having done that vetting for you. What's more, the workshop structure lets you glance into the day-to-day operations of the world's best professionals in their fields.

Imagine learning how to do sales from someone who sold a million dollars worth of SaaS through LinkedIn in a year. Now imagine learning how the pros use LinkedIn from the person who designed it.

INSANE.

3. The accelerator effect

You're surrounded by driven founders and an amazing team of associates there to push you to your limit. The accelerator is a place where time flows at a different pace, and you get to do more, faster. There's a reason accelerated startups have a higher chance of survival than non-accelerated ones: you get set on the right track during your time there. Mistakes that could cost you your business can be avoided, crucial pivots can be made in time, and you're motivated to get the most out of it. You might think you can make it on your own – but if there's an amazing opportunity out there to get you to the next level, you'd be a fool not to take it.

4. Relevant business partners

Barclays Techstars carries this in its name – by partnering with a leading bank, the fintech focus is obvious. In this field trust is key, and you have a chance to gain the trust of one of the biggest players in the industry, which can make or break you at the early stages. A lot of big enterprises are trying to source innovation from startups, but because of their immense size, even a ball that gets rolling can get stuck along the way. We were very pleasantly surprised to see that they do their very best to get us in front of relevant decision makers and to advise us on navigating the Big Corporate hurdles that can throw a young upstart into a tailspin. All in all, very professional – and we expect other big enterprises to adopt this sort of tight cooperation with experienced accelerators for mutual benefit.

5. Enormous, but still tight

Techstars is now a global program, but because it's deployed in so many locations, each cohort can still be run tight. Especially when it comes to the investment circuit, it makes a difference whether they have to choose between fifty companies or only a dozen or so. The commitment the program managers and associates can make to helping you also benefits from this, as you're not lost among the many. Thousands of startups come out of different accelerator programs every year, and here you can be part of a tight unit that's pledged to make you successful.

Oh, and the one reason we wouldn’t recommend it?


Time flies too fast!

This might be specific to the program we did – after all, Tel Aviv is a fun town with great food, friendly people, a lively culture and a lovely beach. We had a blast both on- and after hours, and while the 90 days initially seemed like a large commitment (it's basically an entire business quarter), by the time the program was reaching its end, many of us were already plotting how to get back.

If you fall in love with places easily, don't want a second home to feel homesick for, and can disregard the other enormous benefits of the program, then don't apply. Simple as that! 🙂

All in all: Barclays Techstars in Tel Aviv was a 10/10.

LivePerson partners with Jenny for the Live Engage Bot platform

We are proud to announce that we are among the select few AI companies helping LivePerson, the leading provider of cloud, mobile and online business messaging solutions, run the world's first enterprise-level bot management platform.

LiveEngage for bots allows large brands to deploy, manage, and measure bots they build on their own on LivePerson’s open framework — as well as bots from third parties — to provide customer care and sales assistance to consumers.

Instead of "set and forget" bots that run unsupervised, it adds a layer of analytics and intelligence on top of the AI, which helps businesses better understand the effectiveness of bots in their customer care operations.

Through LiveEngage’s open framework, businesses can build their own bots, or bring in bots from a third-party developer, to be managed on the LiveEngage for Bots platform. To bring this platform to life, LivePerson has partnered with a number of bot and AI providers, including IBM Watson, which is already running at large banks and telcos. Other LiveEngage for Bots partners include a number of start-ups doing innovative work in the bots space, including NextIT, Robotify, Bot Central, Get Jenny and Chatfuel, whose bot, running on the LiveEngage for Bots platform, was recently showcased at Facebook’s F8 conference.

We are looking forward to being a part of this initiative, helping businesses take their customer service to the next level and giving consumers the help they deserve. Our professional tools and open-source technology will be deployed at the busiest intersections of communication, helping businesses scale their operations as demand for live engagement grows.

On hype driven machine learning

(Original image from the amazing Shivon Zilis)

This is the competitive landscape for machine learning as of now. Countless posts have been written lately on who and what you need to follow in order to navigate this landscape, and rightly so. It's already enormous, and as with any industry that's just awakening, the boundaries are not yet clear and everything is up in the air.

In short, if you wanted to build a chat application for your business or for fun and looked at this chart, you would be confused about where to start.

And no wonder: the latest buzzwords are AI and machine learning, and startups across the globe are getting in on the game – plenty of them just tacking the words on in hopes of quick funding. Not without merit: investors are swarming in, and some big exits have already happened. It's a gold rush.

And it’s hectic.

Now, coming down to the fact of the matter, for a working conversational agent (or chat interface, or bot, or human-like automation) you need three things:

– A good language model
– Data to train the agent on
– Connector functions, i.e. the systems (chat and otherwise) the agent will interact with.

Let’s focus on the first two, as the third one is more or less covered by the market adequately at this point.

The talk of the town is neural networks – and rightly so. The technology, which has been around since the '70s, is now very accessible. Computing power is cheap enough to build large neural nets, and companies are serving this need very well. Nvidia's pivot comes to mind as one of the most successful.

However, our previous experience shows that a "purely neural network" approach – one that analyses only the usually scarce number of sentences produced by even big corporations (counting far below a billion words) – leads to extremely poor results.

Neural networks require a lot of representative data, something most companies don't have lying around. You can use generative neural nets to create simple question-answering bots, but the more specific the task becomes, the more this approach falls apart. Sure, anybody can download a big dataset and train a TensorFlow model by following a recipe to answer questions, but such a model is useless if you want your bot to take specific actions – making a payment, placing an order, looking up customer info, and so on.

To draw a real-life comparison: it is like asking a child to learn Finnish, and how to answer customer service questions, after having listened to just the few thousand sentences in a customer service log – sentences which are most of the time quite similar. Computers are good at analyzing huge amounts of data (in the exabyte range), and the logs of past customer service conversations are usually a factor of a million below that.

Like all hard problems, it comes down not to the tools you use but to how you approach the problem. We believe it pays off not to do hype-driven development, but to go with what works. In a multilingual environment with a limited amount of data available, you need to combine multiple different tools rather than just follow the trend.

We are confident that our conclusions are correct, based on years of experience at the forefront of machine learning research:

Mario Alemi was an associate scientist at CERN for five years, developing algorithms for LHCb; professor of Data Analysis for Physics in Milan and at Uninsubria, Italy; and professor of Mathematics at the École Supérieure de la Chambre du Commerce et de l'Industrie de Paris. At the moment he is scientific coordinator for the Master in Data Analysis at the Italian TAG Innovation School.
Mario has more than 50 peer-reviewed scientific publications, most of them on statistical techniques for data modelling and analysis, with more than 4,000 citations.
Mario has also been responsible for AI development at Your.MD, praised by various publications, including the Economist, as one of the most advanced AI-based symptom checkers available today.

Angelo Leto has many years of experience in software engineering, implementing machine learning algorithms for NLP at CELI, in medical imaging, and in other data-driven contexts. He has also worked at the Abdus Salam International Centre for Theoretical Physics and the International School for Advanced Studies in Trieste, implementing data processing infrastructure and optimizing and porting scientific applications to distributed environments. Recently he gave a lecture on parallel computation with Apache Spark at the Master in High-Performance Computing.

GetJenny was selected to be a part of Nvidia's Inception program. The Inception program gives us access to a great network of cutting-edge expertise and exclusive learning resources, as well as remote access to state-of-the-art technology.

If you would like to learn more, sign up to our beta waiting list, and we’ll be in contact with you shortly.

GetJenny enters Barclays Techstars Tel Aviv!

Our team is very excited to be a part of this year’s Techstars batch in their Tel Aviv-based FinTech program.

We look forward to the next three months of intense work among some of the brightest minds and entrepreneurs of the Startup Nation.

This unique partnership between Barclays and Techstars brings two networks together into one accelerator program that offers entrepreneurs unprecedented access not only to a world-leading bank, but also to Techstars' international mentor and investor relationships.

This round includes eight companies aiming to change the financial industry with cutting-edge technology. All of us will be working hard on both our technology and our offerings, guided by the program's mentors.

Exciting times are ahead for all of us, and the countdown until this year’s Demo Day (May 10) is already displayed at our offices.

This year in AI

2016 was the year that chatbots and AI got BIG. It was a busy year for all of us, with over 34,000 bots released for Facebook Messenger alone, and now we have a bit more clarity on whether they're a passing fad or here to stay.

HubSpot co-founder and CTO Dharmesh Shah asked an important question: “Are chatbots going to be a passing fad or the next big thing?”

His answer: “I think they’re going to be the biggest wave we’ve seen in technology in the last two decades.” For marketers, software that you can converse with using natural language is forever changing the way users discover content.

And to that extent, he's not wrong. More and more of the websites we visit greet you with an automated message through a chat box, and you might get answers to your questions before even realizing you're conversing with a bot.

So the fact that the tech has finally caught up to the theory is paying off in different ways. One of the most interesting players in this space is Nvidia, which is moving from powering video cards to powering deep learning.

To give you an idea of where bots are now, have a look at Telegram’s “Fast and useful” competition winners:

@icon8bot – an AI-based bot for applying filters to photos using neural networks and machine learning tech
@integram_bot – a bot that supports setting up integrations with third-party services for developers, such as Trello, GitLab and Bitbucket
@werewolfbot – a ‘Werewolf Game’ bot for group chats which Telegram says has been popular in SE Asia
@strangerbot – a bot that connects two random users to chat anonymously which Telegram claims “gained huge popularity after going viral in Southern Europe”
@octopocket_bot – a smart-wallet bot that allows users from the EU to transfer money to each other and is integrated into the ATM system of Spain

If you're an old web user, you may recognize some of these bots as resurfaced IRC bots from their heyday, but the consumer applications are fun and engaging. They also highlight the key feature of bot technology: connecting services, with a conversational UI on top for easy use. This lowers the technical barrier to entry, which can have some great applications – today, businesses typically employ people familiar with the underlying services to translate the wishes of customers and other departments.

However, there was also a bunch of chatbots that missed the target. Many of them are neither artificial nor intelligent: a plethora of companies jumped on the bot wave and just tried cramming features that were perfectly well served by boring old UIs into a conversational engine.

Owais Afaq over at TechCrunch went through the pain of testing some of these services.

The takeaway here is that if a chatbot tries to be too general or promises too much, then no matter the marketing or the shiny UI, it's going to suck, and it's going to tank.

On other notes, there have been some interesting projects – like the much-awaited viv.ai, which aims to connect a bunch of different services under one conversational system. There are also a lot of services coming out that aim to help people build their own bots (such as kitt.ai).

And if that wasn't enough to convince you that bots are here to stay, just look at the acquisitions: the tech giants all rushed to buy AI service providers to broaden their own intelligent agent offerings. While the numbers are undisclosed, within the startup circuit these moves will bring more players to the table.

This is further emphasized by the fact that "generic chatbots" are pivoting towards "purpose-specific" bots – an obvious play that focuses on their dominant use cases and takes advantage of the fact that, bluntly put, we can do specific right, and we fail when we try to be too generic.

It's been an exciting year for all of us, and we can't wait to see what next year will bring!

© 2017 Jenny. All Rights Reserved.