Daniela de Oliveira

KYC Onboarding Drop-off

Product: KYC Onboarding on mobile native apps and web
Company: Fourthline
Role: Product Designer




What is a KYC onboarding flow?

KYC stands for Know Your Customer, which in simple terms means that some businesses are required to verify your identity. That usually involves photos of your ID document, a selfie, and sometimes geolocation. Depending on country regulations, more layers of security can be added, such as proof of address, bank account verification, etc.

As a Product Designer, my job was…

To help simplify Fourthline’s KYC experience. It was a relatively small user flow with plenty of underlying layers of complexity.

What were the biggest quantitative findings?

When the Engineering Manager shared the data with us, we learned we had some drop-off points in the KYC flow. The screens where users dropped off the most were:

👉 1. Where the user selects their country.
👉 2. Where the user is required to take a tilted photo of their ID document.



How did we fix the 1st drop off point?

The mobile development team investigated the first drop-off point and found that people were leaving because their country was not supported by some clients. Users dropping off due to this purposeful configuration distorted our conversion numbers, so we adopted a “Positive drop-off” approach: we created a specific button for users to tap when their country was not supported. This way, we were able to separate the good data from the bad.
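The "positive drop-off" split described above can be sketched as a simple event filter. This is a minimal illustration; the event shape, field names, and exit reason are assumptions for the sketch, not Fourthline's actual analytics schema:

```python
# Hypothetical sketch: splitting funnel exits into "positive" drop-offs
# (user tapped the dedicated "my country is not supported" button) and
# genuine abandonment. Event fields are illustrative assumptions.

def split_drop_offs(events):
    """Separate intentional, expected exits from real abandonment."""
    positive, negative = [], []
    for event in events:
        if event.get("exit_reason") == "country_not_supported":
            positive.append(event)   # expected exit, excluded from conversion
        else:
            negative.append(event)   # real drop-off worth investigating
    return positive, negative

events = [
    {"user": "a", "exit_reason": "country_not_supported"},
    {"user": "b", "exit_reason": None},
]
positive, negative = split_drop_offs(events)
print(len(positive), len(negative))  # → 1 1
```

Tagging the intentional exits at the source, rather than guessing afterwards, is what makes the conversion numbers trustworthy again.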



How did we fix the 2nd drop off point?

My suggestion to run usability tests was received positively by my Design Manager and Product Manager, but flagged as complicated to execute: as a SaaS product, we had no direct way to contact our clients’ users. That was a fair point, but each client did send us demographic data, so at least we could start there.

I rolled up my sleeves and contacted our Data Analyst to get the demographics of the majority of our clients’ users. The findings told us that in 2021 the user base looked like this:

✏ Users’ median age was between 20 and 30 years old
✏ The majority lived in Madrid and Amsterdam
✏ 64.16% were male
✏ The client with the biggest user base was a neobank




Based on this information and the types of apps our clients had, I created proto-personas for different profiles: Neo Bank user, Digital Bank user, Store account owner, Crypto wallet user, and Trader. These proto-personas were based on assumptions, since we couldn’t interview our clients’ users, but they were good enough to start recruiting participants to test the product.



Hiring a usability testing and participant recruitment tool

I did my research online and contacted a few usability testing vendors. After some demo calls and email exchanges, we decided that the best tool for working with sensitive data, such as ID documents, was Userlytics, as they tailored their solution to meet our needs. They weren’t cheap, but they were the best.
I created a UX research plan for our 3 platforms: Android, iOS, and Web. It was not easy, as there were several technical restrictions at the time.

How did you overcome the technical restrictions?

Bringing up usability testing with the teams was not as easy as I expected. The web team immediately said it was not possible, and the mobile team needed to arrange some things on their end to make it possible for us. It took a few months to actually get it up and running.

To test on web, we needed to ask the participant in advance for their email and phone number so we could generate their KYC ID flow. It was a lot of work to plan it, I must say. But in the end, everything worked out, and I am grateful we had Gaia and Maxime from Userlytics helping us out the entire time.

What went wrong? While I was moderating 2 of the Android usability tests that were booked for the same day, I realized that Zoom didn’t record the screen so I did those 2 tests completely blind. I did know the flow by heart and asked the users to speak aloud all their actions, and to help me out by explaining what they were seeing in detail. I believe it still gave me good insights. The technical problem happened because the Android developers had blocked the screen sharing functionality, which they fixed later for upcoming usability tests.

How did you prepare the usability tests?

In total we did 20 usability tests in 2 different time periods:

👉 1st Phase: Moderated usability tests with 5 users for web, 5 for Android (app still in Firebase)

👉 2nd Phase (planned for months later): Unmoderated usability tests with 5 participants for iOS, 5 for Android (apps already in the App Store/Google Play)

Since the flow was linear, I didn’t see the need to divide it into smaller tasks. We presented one scenario to the users and, at the end, asked them a few questions about it and their previous experience with similar flows.



What were the results of the usability tests?

In the first phase of tests, the most common patterns were:

👉 All participants (all 10 of them!) had an issue tilting their ID document. They didn’t understand how to do it, nor how much to tilt it.

👉 On web, all 5 participants had difficulties taking the selfie and reading the hint on the screen at the same time (the hint, located at the bottom of the screen, read “Look straight into the camera”, while the camera is at the top of the phone).

👉 We discovered that participants living in Spain were confused by the label “Document number”. When the user takes a photo of their ID, the app detects the text in it with OCR (Optical Character Recognition) and pre-fills the information, which the user later needs to confirm. While the label of the text input said “Document number”, the app was pre-filling it with the so-called “Support number”. Technically, the app was capturing the correct number, but people got confused because many Spanish people know their ID number by heart, and this was not it, leading many of them to “fix” it manually.

👉 When an error occurred during the ID document scan, the user needed to start over. Imagine the user selects their National ID card. For that document, we ask the user to take 4 photos (Front, Front tilted, Back, and Back tilted). If the user is on photo 3 and gets an error during the scan, usually a timeout, they are sent back to the 1st photo. This caused a problem because people didn’t realize they were back at the beginning: they simply retook the 3rd photo, only understanding later that they had made a mistake and had to repeat the photos.

How did we fix it?

Let me go through the resolutions one by one:

👉 For the ID document tilting issue, we decided to add a bottom sheet at the start of the step to alert people that they needed to tilt their document. In the 2nd round of usability tests, we tested this implementation and realized soon enough that it didn’t work, because people are very quick to dismiss pop-ups. To provide some context, we couldn’t remove the tilted document photo yet, as we used those photos to keep our risk score very low and for ML (Machine Learning) purposes.



👉 On our second attempt, we decided to add animations before each photo to clearly mark the steps and provide more guidance to the user. This worked, and the numbers improved. The image below is a draft of the idea; it was refined later.



👉 For the misplaced text in the selfie step, we moved the instruction “Look straight into the camera” to the top of the screen, closer to the camera. This made more sense and felt like a logical fix.



👉 For users living in Spain, we wanted to make sure they were no longer confused by the label of the text input, so we simply corrected the copy from “Document number” to “Support number”. In the 2nd phase of usability tests, we found that this improvement helped, but some users were still confused, so we also added a tooltip with an image of a Spanish ID indicating which number was needed.



👉 For the latter issue of users restarting the document photo flow without noticing it, we decided that if an error happened, we would send the user to a screen shown before the scan starts. This way, the user is taken completely out of the scanner and understands they are starting again. This implementation was simple and worked like a charm.
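The error-handling change above can be sketched as a small state machine. This is an illustrative sketch only; the screen names and step list are assumptions, not the app's real implementation:

```python
# Illustrative sketch of the fix: on any scan error, the user leaves the
# scanner entirely and lands on an intro screen, instead of being silently
# reset to photo 1 inside the scanner. Names are assumptions.

STEPS = ["front", "front_tilted", "back", "back_tilted"]

class DocumentScan:
    def __init__(self):
        self.screen = "intro"    # screen shown before the scanner opens
        self.step_index = 0

    def start(self):
        self.screen = "scanner"
        self.step_index = 0

    def photo_taken(self):
        self.step_index += 1
        if self.step_index == len(STEPS):
            self.screen = "done"

    def on_error(self):
        # Old behaviour: stay in "scanner" and jump back to photo 1,
        # which users did not notice. New behaviour: exit the scanner
        # so the restart is unmistakable.
        self.screen = "intro"
        self.step_index = 0

scan = DocumentScan()
scan.start()
scan.photo_taken()   # front done
scan.photo_taken()   # front_tilted done
scan.on_error()      # timeout during photo 3
print(scan.screen, scan.step_index)  # → intro 0
```

The key design choice is that the error recovery changes the *screen*, not just the step counter, which is what makes the restart visible to the user.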



What learnings did you get from this?

👉 In general, participants didn’t have problems completing the entire flow. There were a few hiccups here and there that could definitely be fixed through UX.

👉 The biggest flaw we found, also validated through the second phase of usability tests, was when users had to scan their ID document via NFC (Near-Field Communication). I didn’t want to bring this up in this case study, as it took a turn of its own.

👉 We were constantly checking the data with the help of Eduardo, the mobile Engineering Manager, and I learned it was no piece of cake: not all clients were on the latest versions, so we had to filter the data per version. Nonetheless, the numbers were good enough to tell us that conversion increased with the latest updates. Of course, it was not only UX updates; there were also technical improvements made by the engineers, such as moving the document masks to the center of the screen and making them bigger to decrease blurriness and increase photo resolution.
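The per-version filtering described above can be sketched as a simple grouping of sessions before computing conversion. A minimal illustration; the session fields and version strings are assumptions, not Fourthline's actual analytics data:

```python
# Hypothetical sketch: because not all clients ran the latest app version,
# raw conversion had to be grouped per version before comparing releases.
# Field names and versions are illustrative assumptions.

from collections import defaultdict

def conversion_by_version(sessions):
    """Return completion rate per app version."""
    started = defaultdict(int)
    completed = defaultdict(int)
    for s in sessions:
        started[s["app_version"]] += 1
        if s["completed"]:
            completed[s["app_version"]] += 1
    return {v: completed[v] / started[v] for v in started}

sessions = [
    {"app_version": "1.4.0", "completed": False},
    {"app_version": "1.4.0", "completed": False},
    {"app_version": "2.0.0", "completed": True},
    {"app_version": "2.0.0", "completed": True},
    {"app_version": "2.0.0", "completed": False},
]
print(conversion_by_version(sessions))
```

Comparing rates per version, rather than one blended number, is what let the team see whether the newer releases actually converted better.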



Thank you for reading!