# Collect input

Add a **Collect input** block by [dragging and dropping it](https://docs.chatlayer.ai/buildabot/bot-navigation/bot-builder/flows/canvas-functionalities#drag-and-drop) to your flow.

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2Ftx7KHBrVRlA6RbiiptWk%2FScreenshot%202024-07-31%20at%2016.34.30.png?alt=media&#x26;token=8b5e58cf-1ab3-4d3d-8e7e-afba69088cb7" alt="" width="144"><figcaption><p>Collect input tab.</p></figcaption></figure>

A **Collect input** block asks the user for information, validates it, and saves it as a [variable](https://docs.chatlayer.ai/bot-answers/settings/secure-variables-gdpr).

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FWoGfoyKOT2K9oBUAFDGQ%2FScreenshot%202024-02-06%20at%2014.11.32.png?alt=media&#x26;token=e60c0260-45d4-4543-875e-760d296a3e32" alt=""><figcaption><p>What a Collect input block looks like on the canvas.</p></figcaption></figure>

A **Collect input** block typically does three things:

* defining [which input type](#input-types) you're expecting
* [checking if the user response matches](#check-if-the-input-matches) that type
* [configuring](#configuration) your bot's behaviour after that

## Add a question step

A Collect input should clearly ask the user for some input.

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FLHgUDKdkNyTrdO8fs8Eu%2FScreenshot%202024-11-07%20at%2015.42.26.png?alt=media&#x26;token=bb0d91f8-e85f-438a-8f03-a037496b18f5" alt="" width="375"><figcaption><p>Ask a question to the user.</p></figcaption></figure>

## Capture user response as

The **Collect input** block first checks whether the input matches an [input type](#input-types).

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FY4jP6GjJ2g9dLdbL6bfU%2FScreenshot%202024-11-07%20at%2015.44.47.png?alt=media&#x26;token=7a8feadb-b679-47bf-afe8-8ad0bcfbda79" alt="" width="375"><figcaption><p>Check if the user input matches.</p></figcaption></figure>

{% hint style="info" %}
For [voicebots](https://docs.chatlayer.ai/voice/phone-and-voice), make sure you use the **voiceMessage** input type.
{% endhint %}

If so, the **Collect input** saves that input under a [**destination variable**](#when-the-user-response-matches).

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FGFGxmwFBaEzGB6GXML8h%2FScreenshot%202024-11-07%20at%2015.47.53.png?alt=media&#x26;token=17bac5c8-43fd-4103-81ae-f801644843a9" alt="" width="375"><figcaption><p>Save the matched input into a variable.</p></figcaption></figure>

## Input types

**Collect input** blocks support three types of input recognition:

* [**General**](#general-input-type) input types to check if the input follows a desired format.
* [**System**](#system-entities-input-type) input types to check if the input matches one of Chatlayer's built-in entities.
* [**Entity**](https://docs.chatlayer.ai/nlp/natural-language-processing-nlp/detect-information-with-entities) input types to check the user input against a custom bot [entity](https://docs.chatlayer.ai/navigation/natural-language-processing-nlp/synonym-entities).

{% hint style="warning" %}
Chatlayer extracts data from user inputs. For instance, if a **Collect input** uses the **date** type and the user says 'I need to be in Paris *in two days*', the parser identifies 'in two days' as the date, converts it to the DD-MM-YYYY format, and stores the result in the user session.
{% endhint %}
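As an illustration of this kind of conversion (a simplified sketch only, not Chatlayer's actual parser), a relative date expression can be resolved to a DD-MM-YYYY string like so:

```javascript
// Illustrative sketch, not Chatlayer's actual parser: resolve a relative
// expression such as "in two days" to a DD-MM-YYYY string.
function resolveRelativeDate(expression, now = new Date()) {
  const result = new Date(now);
  const match = expression.match(/in (\d+|two|three) days?/i);
  if (match) {
    const words = { two: 2, three: 3 };
    const days = words[match[1].toLowerCase()] ?? parseInt(match[1], 10);
    // setDate handles month rollover automatically (e.g. 30 Jan + 2 = 1 Feb)
    result.setDate(result.getDate() + days);
  }
  const pad = (n) => String(n).padStart(2, "0");
  return `${pad(result.getDate())}-${pad(result.getMonth() + 1)}-${result.getFullYear()}`;
}
```

The real parser handles many more expressions (see the **Date** input type below); this only shows the general shape of the extraction-and-format step.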

{% hint style="danger" %}
Please note that only a limited set of [system entities](https://docs.chatlayer.ai/nlp/natural-language-processing-nlp/detect-information-with-entities/system-entities) can be used inside a **Collect input** block: **sys.email**, **sys.phone\_number**, **sys.url**, **sys.number**, and **sys.time**.
{% endhint %}
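For illustration only, the kind of format check these system entities perform might look like the following (these are simplified patterns, not Chatlayer's internal validators):

```javascript
// Simplified, illustrative format checks; NOT Chatlayer's internal validators.
const systemEntityChecks = {
  "sys.email": (s) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s),
  "sys.url": (s) => /^https?:\/\/\S+$/i.test(s),
  "sys.number": (s) => /^-?\d+(\.\d+)?$/.test(s.trim()),
  "sys.phone_number": (s) => /^\+?[\d\s()-]{7,}$/.test(s.trim()),
};

// Returns true when the input matches the chosen system entity's format.
function matchesSystemEntity(entity, input) {
  const check = systemEntityChecks[entity];
  return check ? check(input) : false;
}
```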

### General input

The **General** input type checks if the input follows a desired format.

<details>

<summary>Any</summary>

The **Any** input type accepts any string value as input.

It is important to know that intents and entities are processed before parsers. This can be useful for automatically extracting certain pieces of a sentence as an answer to a question. We've provided a great example of this in our [tutorial](https://docs.chatlayer.ai/start-quickly/leadzy-tutorial/3.-collect-and-display-user-input).

</details>

<details>

<summary>Date</summary>

The **Date** input type parses the response as a date. Sentences like 'next week Monday' are automatically converted to a DD-MM-YYYY date object. Supported formats (also in other supported NLP languages) are:

* *22-04-2018*
* *22-04*
* *22 apr*
* *22 april 18*
* *twenty two April 2018*
* *yesterday*
* *today*
* *now*
* *last night*
* *tomorrow, tmr*
* *in two weeks*
* *in 3 days*
* *next Monday*
* *next week Friday*
* *last/past Monday*
* *last/past week*
* *within/in 5/five days*
* *Friday/Fri*

</details>

<details>

<summary>Image</summary>

The **Image** input type allows you to check if a user has uploaded an image or other file (such as a PDF).

The upload will be saved as an array. If you chose `img` as the variable name, use `{img[0]}` to retrieve the URL of the first saved image.

For the chat widget (web channel), we recommend using the [file upload](https://docs.chatlayer.ai/buildabot/flow-logic/message-components#file-upload) step.

To save a user's attachment at any point in the flow, use the `defaultOnFileUpload` variable. This variable will store the URL of the attachment uploaded by the user, regardless of where they are in the conversation.
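Assuming the array shape described above (the variable name `img` and the URLs are just examples), retrieving the first upload looks like this in code terms:

```javascript
// Hypothetical session contents, for illustration: uploads are stored as
// an array of URLs under the variable name you chose ("img" here).
const session = {
  img: [
    "https://example.com/uploads/photo-1.png",
    "https://example.com/uploads/photo-2.png",
  ],
};

// `{img[0]}` in a bot message resolves to the first uploaded file's URL.
const firstUpload = session.img[0];
```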

</details>

<details>

<summary>Location</summary>

The **Location** parser sends the user's input to a Google Geocoding API service. When a correct address or location is recognized, the Chatlayer platform automatically creates an object that contains all relevant geo-data.

<img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FeVXqAX2jWzKchaSzUWlL%2FScreenshot%202023-08-01%20at%2009.06.29.png?alt=media&#x26;token=ae128404-72c0-47ee-b700-7563caaa68f3" alt="Check the user location." data-size="original">

Look at the block above. When the user answers the question "Where do you work?" with a valid location, this information is stored in the `userLocationInformed` variable (you can rename this variable if needed).

Below is an example that shows how the `userLocationInformed` variable would be stored when the user responds with 'Chatlayer.ai':

```javascript
{
    fullAddress: "Oudeleeuwenrui 39, 2000 Antwerpen, Belgium",
    latitude: 51.227317,
    longitude: 4.409155999999999,
    streetNumber: "39",
    streetName: "Oudeleeuwenrui",
    city: "Antwerpen",
    country: "Belgium",
    zipcode: "2000",
}
```

To display the full address (street, street number, zip code, and city), append `.fullAddress` to the variable.

So in the example above, the bot can display the entire location by using the following variable: `{userLocationInformed.fullAddress}`

A bot message containing the following info:

`Thank you, shall I send your package to {userLocationInformed.fullAddress}?`

Will display the following message to the user:

`Thank you, shall I send your package to Oudeleeuwenrui 39, 2000 Antwerpen, Belgium?`

</details>

<details>

<summary>Language</summary>

This input type parses and validates NLP-supported languages.

* English (en-us): 'engels', 'English', 'en', 'anglais'
* Dutch (nl-nl): 'nederlands', 'Dutch', 'ned', 'nl', 'vlaams', 'hollands', 'be', 'néerlandais', 'belgisch'
* French (fr-fr): 'French', 'français', 'frans', 'fr', 'francais'
* Chinese (zh-cn): 'Chinese', 'cn', 'zh', 'chinees'
* Spanish (es-es): 'Spanish', 'español', 'es', 'spaans'
* Italian (it-it): 'Italian', 'italiaans', 'italiano', 'it'
* German (de-de): 'German', 'duits', 'de', 'deutsch'
* Japanese (ja-jp): 'Japanese', 'japans', 'jp', '日本の'
* Brazilian Portuguese (pt-br): 'Brazilian Portuguese', 'Portuguese', 'portugees', 'braziliaans portugees', 'português'

</details>

<details>

<summary>voiceMessage</summary>

Use the **voiceMessage** input type to save voice channel messages as text. Configure the maximum duration and completion time for these messages.

<img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FpsWDXTDwDNeiiW0Z62cP%2FScreenshot%202023-08-01%20at%2009.08.41.png?alt=media&#x26;token=575143d5-b4c7-4aad-bf35-fcc777dd0c72" alt="" data-size="original">

</details>

<details>

<summary>Hours</summary>

This input type parses and validates timestamps.

</details>

### System entity input

The **Collect input** parser can check if the given input is consistent with the format for one of the following [system entities](https://docs.chatlayer.ai/nlp/natural-language-processing-nlp/detect-information-with-entities/system-entities). Whenever a system entity is chosen in the 'Check if response matches' dropdown, you can give the variable a name that works for you.

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FvmvORqmgKtTUaRpMo7YA%2FScreenshot%202023-08-01%20at%2009.11.56.png?alt=media&#x26;token=286aa4bb-31a8-4dde-9fe6-cace2e93f73f" alt="" width="375"><figcaption><p>Rename the variable that matches the system entity parser.</p></figcaption></figure>

<details>

<summary>🆕 LLM-based system entity recognition</summary>

Your bot can now recognize system entities based on the context of the conversation, using LLM technology.

For now, this feature is only available for [system entities](https://docs.chatlayer.ai/nlp/natural-language-processing-nlp/detect-information-with-entities/system-entities).

For example:

1. Go to your bot [**Settings**](https://docs.chatlayer.ai/navigation/settings).
2. Under **Generative AI**, click the toggle next to **Turn on generative AI features**.
3. Toggle on **LLM-based entity recognition**.&#x20;

<img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2F8oo0kAWYkdCj5chvlpi5%2FScreenshot%202024-11-12%20at%2011.09.04.png?alt=media&#x26;token=5cd3e3ba-0478-48e0-abaf-2576980beb14" alt="" data-size="original">

4. Click **Save**.
5. Go back to your [**Flows**](https://docs.chatlayer.ai/navigation/bot-builder/flows).
6. Create a **Collect input** block that asks for the number of passengers and checks whether the answer matches **@sys.number**.

![](https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2Fdlvtmo0hqT9YBbNg1Th9%2FScreenshot%202024-11-07%20at%2014.38.07.png?alt=media\&token=76f6707b-29d3-46b7-a1b4-6f6a27741c27)

7. Display that variable in the next block.
8. Test the bot: the bot now recognizes more complex sentences as the right number of people!

<img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FjBp0VdrvbraGyK6p1wo5%2FScreenshot%202024-11-12%20at%2010.14.41.png?alt=media&#x26;token=f9c138b0-9be5-4a22-9fa5-ad1aca4915aa" alt="" data-size="original">

</details>

### Entity input

After you've created an [entity](https://docs.chatlayer.ai/navigation/natural-language-processing-nlp/synonym-entities), you can check whether the user input matches it.

{% hint style="info" %}
Learn more about which [entity type](https://docs.chatlayer.ai/nlp/natural-language-processing-nlp/detect-information-with-entities) suits your use case.
{% endhint %}

## Check if the input matches

A Collect input block first checks whether its destination variable already has a value:

* If the variable does not have a value yet, the bot asks the question written in the Collect input block. At this point, either:
  * [The value matches](#when-the-user-response-matches), and the variable is filled.
  * [The value doesn't match](#when-the-user-response-doesnt-match), and the fallback questions are asked.
  * The user doesn't answer, and you can [detect this silence](#no-response).
* If the variable already has a value, the bot automatically skips the Collect input block.
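The skip rule can be sketched in a few lines (illustrative only, not the platform's actual engine): a Collect input asks its question only when the destination variable is still unset.

```javascript
// Illustrative only: a Collect input asks its question only when the
// destination variable has no value yet; otherwise the block is skipped.
function shouldAskQuestion(session, variableName) {
  const value = session[variableName];
  return value === undefined || value === null || value === "";
}
```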

### When the user response matches

If the response matches when the Collect input is triggered, it is saved under the specified variable name, which you can verify in the [debugger](https://docs.chatlayer.ai/buildabot/emulator).

{% hint style="warning" %}
If the Collect input block is skipped, it's because the variable is already known. A variable can already be known for several reasons:

* The user has answered this question before.
* A previous entity was detected with the same variable name.
* The user is authenticated and the variable was automatically set.
{% endhint %}

### 🆕 When the user response doesn't match

When the user provides an invalid response, the bot should inform them that their answer was invalid.

#### Retries

Set up how many times you want the bot to ask the question again. Typically, the retry message simply asks the user to reformulate their answer.

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2F0WeUN6UHNhm1DCgxX785%2FScreenshot%202024-11-07%20at%2015.35.56.png?alt=media&#x26;token=009141f6-3674-446a-9260-4de8b0c4a301" alt="" width="375"><figcaption><p>Set up Retries inside Collect input blocks.</p></figcaption></figure>

#### Fallback

Set up a fallback to redirect the user once they have used all their retries. Typically, this leads to help from customer support, for instance.

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2Ff51w2DBIklgRq5jx36cF%2FScreenshot%202024-11-07%20at%2015.37.56.png?alt=media&#x26;token=46b072ef-72d6-4670-9ff1-d86637acbcc0" alt="" width="375"><figcaption><p>Set up a fallback after X retries.</p></figcaption></figure>
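The retry-then-fallback routing can be sketched as follows (illustrative only; `maxRetries` stands in for the number of retries you configure):

```javascript
// Illustrative routing: an unmatched answer triggers a retry until the
// configured number of retries is exhausted, then the fallback block.
function nextStep(attempt, matched, maxRetries = 2) {
  if (matched) return "continue"; // matched: fill the variable, move on
  return attempt < maxRetries ? "retry" : "fallback";
}
```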

#### No response

You can configure the bot to detect when a user remains silent for a specified period. A specific block is triggered when no response is received within the set timeframe, or by a predetermined time.

You can set how long it takes for the new block to trigger in the duration field (in minutes, or at a specific time). The duration of silence can range from 1 minute to 1440 minutes (24 hours).

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FtLH5wFRYHXVIsdePfv9I%2FScreenshot%202023-08-01%20at%2008.48.51.png?alt=media&#x26;token=570fd550-42b8-4f89-ac2c-fb5fe47a6f0f" alt="" width="375"><figcaption><p>You can select between 'wait for' or 'wait until' for user response before triggering a block</p></figcaption></figure>

## Configuration

The bottom of your Collect input block can be configured to make sure you detect the answer you're looking for.

<figure><img src="https://2786867680-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-LLTwFwbOqJj4dDhg8Ju%2Fuploads%2FRzyNxvMMf1SmPVx2a9ch%2FScreenshot%202023-08-01%20at%2009.04.28.png?alt=media&#x26;token=4c953c7a-c7fe-4bce-9d74-f229cf2006df" alt="" width="375"><figcaption></figcaption></figure>

#### Disable NLP

Users can leave the Collect input block whenever an intent is recognized. For bots with a very small NLP model, this might trigger a false positive. The **Disable NLP** checkbox lets you disable the NLP model while in the Collect input, which makes sure that whatever the user says gets saved as input.

#### For date variables: Always past - always future

When you check a `date` variable, Chatlayer parses the user's expression to match a default date format. If the date you're asking for should always be in the past or always in the future, use these options. A user saying "Thursday", for example, will be mapped to either last or next Thursday.
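A sketch of how "always past" and "always future" could resolve a bare weekday name (illustrative only, not Chatlayer's implementation):

```javascript
// Illustrative sketch of the "always past" / "always future" options:
// resolve a bare weekday name to the nearest matching date in the
// requested direction. Not Chatlayer's actual implementation.
const WEEKDAYS = ["sunday", "monday", "tuesday", "wednesday", "thursday", "friday", "saturday"];

function resolveWeekday(name, direction, now = new Date()) {
  const target = WEEKDAYS.indexOf(name.toLowerCase());
  let diff = (target - now.getDay() + 7) % 7; // days until the next occurrence
  if (direction === "future" && diff === 0) diff = 7;          // never today
  if (direction === "past") diff = diff === 0 ? -7 : diff - 7; // go back instead
  const result = new Date(now);
  result.setDate(result.getDate() + diff);
  return result;
}
```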

