Introduction:

Bot Framework enables you to build bots that support different types of interactions with users. You can design conversations in your bot to be free. Your bot can also have more guided interactions where it provides the users with choices or actions. The conversation can use simple text strings or more complex rich cards that contain text, images, and action buttons. And, you can add natural language interactions, which let your users interact with your bots in a natural and expressive way.

In this article, we are going to create a Xamarin FAQ bot using Azure Bot Service and deploy it into Web Chat. We are not going to write any code to implement the Xamarin FAQ bot, but we need to be ready with the questions and answers to train the bot. I have already created 7000+ Xamarin Q&A entries as a Word document; we will use the same document to upload and train the knowledge base.


Create a QnA Service:

Step 1: 

Navigate to https://qnamaker.ai/ and sign in with your Microsoft account.

Step 2: 

Click on “Create a knowledge base” in the main menu.

Step 3:

You can skip the “Create a QnA service” step; we will publish the QnA service after creating the knowledge base.


Step 4: 

Provide the basic information for the QnA knowledge base.




Step 5:

You can extract question and answer pairs from an online FAQ or product manuals, enter them manually, or upload files in .tsv, .pdf, .doc, .docx, or .xls format. If you plan to enter the pairs manually, skip this step.
To make your bot more conversational and engaging with low effort, add chit-chat. You can easily add a chit-chat data set for one of three predefined personalities when creating your KB, and change it at any time:

  • The Professional
  • The Friend
  • The Comic

Step 5:

 Click on “Create your KB”


Step 6: 

Wait a few seconds for all the knowledge base Q&A pairs to load into the online editor.

Step 7: 

The QnA service has loaded our FAQs into a two-column knowledge base editor, without any extra tweaking needed from you. Now you can edit and modify the existing Q&A pairs, or select “Add new QnA pair” to input other greetings and responses.




Step 8: 

The main menu has different options: Edit, Publish, Save and train, Test, and Settings. Clicking Edit opens the knowledge base edit screen shown above, where you can search and filter the questions and edit them. After editing, always click the “Save and train” menu option to save.


Step 9: 

Click on “Publish”. Once you publish the knowledge base, the endpoint is available for use in your bot or app.

Step 10: 


The knowledge base will generate the following values; make a note of them, as you will need them later in the Azure hosting settings.
  • Knowledge base Key
  • Host Address
  • EndPointKey
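These three values are all a client needs to query the published knowledge base. As a rough sketch (plain C#; the method name and placeholder values below are mine, not from the article), a QnA Maker generateAnswer request is composed from them: a POST to `<host>/knowledgebases/<kb id>/generateAnswer`, an `Authorization: EndpointKey <key>` header, and a JSON body containing the question.

```csharp
using System;

public class QnaEndpointDemo
{
    // Compose the pieces of a QnA Maker "generateAnswer" request from the
    // values shown on the knowledge base publish page. Actually sending the
    // request (e.g. with HttpClient) is left out; this only builds the parts.
    public static (string Url, string AuthHeader, string Body) BuildGenerateAnswerRequest(
        string host, string knowledgeBaseId, string endpointKey, string question)
    {
        string url = $"{host}/knowledgebases/{knowledgeBaseId}/generateAnswer";
        string authHeader = $"EndpointKey {endpointKey}";
        string body = "{\"question\":\"" + question + "\"}"; // naive JSON; escape user input in real code
        return (url, authHeader, body);
    }

    public static void Main()
    {
        // Placeholder values for illustration only; substitute your own.
        var request = BuildGenerateAnswerRequest(
            "https://example.azurewebsites.net/qnamaker", "<kb id>", "<endpoint key>", "What is Xamarin?");
        Console.WriteLine(request.Url);
        Console.WriteLine(request.AuthHeader);
        Console.WriteLine(request.Body);
    }
}
```

The same values are what the Web App Bot template reads from its application settings in the next section.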

Create and Publish QnA Bot in Azure:

Step 1: 

Navigate to https://portal.azure.com/ and log in.

Step 2: 

Select + Create a resource > select “AI + Machine Learning” > click on “Web App Bot”.





Step 3: 

Let us start creating the Web App Bot. Provide the bot name, resource group, and location; follow Step 4 and Step 5 to select the bot template and pricing tier, then click on Create.

Step 4: 

You can use the v3 templates: select “SDK v3” as the SDK version and C# or Node.js as the SDK language. Select the Question and Answer template for the Bot template field, then save the template settings by selecting Select.



Step 5: 

You can choose a pricing tier for the bot service.

 

Step 6: 

Review your settings, then select Create. This creates and deploys the XamarinQA bot service to Azure.


Step 7: 

Open the Xamarin BotQA App Service from All Resources > Application Settings, and edit the QnAKnowledgebaseId, QnAAuthKey, and QnAEndpointHostName fields to contain the values from your QnA Maker knowledge base, as below.


Test and Implement Web Chat App:

In the Azure portal, click on “Test in Web Chat” to test the bot, then click on Channels, deploy the bot to the Web Chat channel, and embed it into your website or application.



Summary

In this article, you learned how to create, train, and publish a QnA Maker knowledge base. I have created a 7000+ Xamarin Q&A knowledge base and deployed it to my blog (www.devenvexe.com) and the Xamarin Q&A Facebook page; you can try out the demo, and if you have any questions, feedback, or issues, please write in the comment box.



Introduction:

Microsoft Cognitive Services is a set of cloud-based intelligence APIs for building richer and smarter applications. The Cognitive APIs can search metadata from photos and videos, detect emotions, run sentiment analysis, and authenticate speakers via voice verification.



The Computer Vision API helps developers identify objects, with access to advanced algorithms for processing images and returning image metadata. In this article, you will learn about the Computer Vision API and how to implement it in a Bot application.

You can follow the steps below to implement object detection in a Bot application.

Computer Vision API Key Creation:

The Computer Vision API returns information about visual content found in an image. You can follow the steps below to create a Vision API key.
Navigate to https://azure.microsoft.com/en-us/try/cognitive-services/

Click on “Get API Key” or log in with your Azure account.
Log in with your Microsoft account and get the API key.


Copy the API key and store it securely; we will use this API key in our application.

Create a New Bot Application:

Let's create a new bot application using Visual Studio 2017. Open Visual Studio > File > New > Project (Ctrl+Shift+N) > select Bot Application.



The Bot Application template gets created with all the components and all required NuGet references installed in the solution.



In this solution, we are going to edit MessagesController and add a service class.
Install the Microsoft.ProjectOxford.Vision NuGet Package:
The Microsoft.ProjectOxford.Vision NuGet package helps to access the Cognitive Services, so install the “Microsoft.ProjectOxford.Vision” library into the solution.


Create Vision Service:

Create a new helper class in the project called VisionService that wraps the functionality of the VisionServiceClient from Cognitive Services and only returns what we currently need.

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using System.Web;
using Microsoft.ProjectOxford.Vision;
using Microsoft.ProjectOxford.Vision.Contract;

namespace BotObjectDetection.Service
{
    public class VisionService : ICaptionService
    {
        /// <summary>
        /// Microsoft Computer Vision API key.
        /// </summary>
        private static readonly string ApiKey = "<API Key>";

        /// <summary>
        /// The set of visual features we want from the Vision API.
        /// </summary>
        private static readonly VisualFeature[] VisualFeatures = { VisualFeature.Description };

        public async Task<string> GetCaptionAsync(string url)
        {
            var client = new VisionServiceClient(ApiKey);
            var result = await client.AnalyzeImageAsync(url, VisualFeatures);
            return ProcessAnalysisResult(result);
        }

        public async Task<string> GetCaptionAsync(Stream stream)
        {
            var client = new VisionServiceClient(ApiKey);
            var result = await client.AnalyzeImageAsync(stream, VisualFeatures);
            return ProcessAnalysisResult(result);
        }

        /// <summary>
        /// Processes the analysis result.
        /// </summary>
        /// <param name="result">The result.</param>
        /// <returns>The caption if found, error message otherwise.</returns>
        private static string ProcessAnalysisResult(AnalysisResult result)
        {
            string message = result?.Description?.Captions.FirstOrDefault()?.Text;
            return string.IsNullOrEmpty(message) ?
                "Couldn't find a caption for this one" :
                "I think it's " + message;
        }
    }
}
In the helper class above, replace the Vision API key; the class calls the AnalyzeImageAsync client method to identify the image metadata.

Messages Controller:

MessagesController is created by default and is the main entry point of the application. It calls our helper service class, which handles the interaction with the Microsoft APIs. You can update the “Post” method as below.

using System;
using System.Diagnostics;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.RegularExpressions;
using System.Threading.Tasks;
using System.Web.Http;
using BotObjectDetection.Service;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

namespace BotObjectDetection
{
    [BotAuthentication]
    public class MessagesController : ApiController
    {
        private readonly ICaptionService captionService = new VisionService();

        /// <summary>
        /// POST: api/Messages
        /// Receive a message from a user and reply to it
        /// </summary>
        public async Task<HttpResponseMessage> Post([FromBody]Activity activity)
        {
            if (activity.Type == ActivityTypes.Message)
            {
                //await Conversation.SendAsync(activity, () => new Dialogs.RootDialog());
                var connector = new ConnectorClient(new Uri(activity.ServiceUrl));
                string message;
                try
                {
                    message = await this.GetCaptionAsync(activity, connector);
                }
                catch (Exception)
                {
                    message = "I am an object detection bot. You can upload or share an image URL.";
                }

                Activity reply = activity.CreateReply(message);
                await connector.Conversations.ReplyToActivityAsync(reply);
            }
            else
            {
                HandleSystemMessage(activity);
            }
            var response = Request.CreateResponse(HttpStatusCode.OK);
            return response;
        }

        private Activity HandleSystemMessage(Activity message)
        {
            if (message.Type == ActivityTypes.DeleteUserData)
            {
                // Implement user deletion here
                // If we handle user deletion, return a real message
            }
            else if (message.Type == ActivityTypes.ConversationUpdate)
            {
                // Handle conversation state changes, like members being added and removed
                // Use Activity.MembersAdded and Activity.MembersRemoved and Activity.Action for info
                // Not available in all channels
            }
            else if (message.Type == ActivityTypes.ContactRelationUpdate)
            {
                // Handle add/remove from contact lists
                // Activity.From + Activity.Action represent what happened
            }
            else if (message.Type == ActivityTypes.Typing)
            {
                // Handle knowing that the user is typing
            }
            else if (message.Type == ActivityTypes.Ping)
            {
            }
            return null;
        }

        private async Task<string> GetCaptionAsync(Activity activity, ConnectorClient connector)
        {
            var imageAttachment = activity.Attachments?.FirstOrDefault(a => a.ContentType.Contains("image"));
            if (imageAttachment != null)
            {
                using (var stream = await GetImageStream(connector, imageAttachment))
                {
                    return await this.captionService.GetCaptionAsync(stream);
                }
            }
            string url;
            if (TryParseAnchorTag(activity.Text, out url))
            {
                return await this.captionService.GetCaptionAsync(url);
            }

            if (Uri.IsWellFormedUriString(activity.Text, UriKind.Absolute))
            {
                return await this.captionService.GetCaptionAsync(activity.Text);
            }
            // If we reach here then the activity is neither an image attachment nor an image URL.
            throw new ArgumentException("The activity doesn't contain a valid image attachment or an image URL.");
        }

        private static async Task<Stream> GetImageStream(ConnectorClient connector, Attachment imageAttachment)
        {
            using (var httpClient = new HttpClient())
            {
                var uri = new Uri(imageAttachment.ContentUrl);
                return await httpClient.GetStreamAsync(uri);
            }
        }

        private static bool TryParseAnchorTag(string text, out string url)
        {
            var regex = new Regex("^<a href=\"(?<href>[^\"]*)\">[^<]*</a>$", RegexOptions.IgnoreCase);
            url = regex.Matches(text).OfType<Match>().Select(m => m.Groups["href"].Value).FirstOrDefault();
            return url != null;
        }
    }
}

Run Bot Application

The emulator is a desktop application that lets us test and debug our bot on localhost. Now, you can run the application from Visual Studio and it will open in the browser.


Test Application on Bot Emulator

You can follow the below steps to test your bot application.
  • Open the Bot Emulator.
  • Copy the above localhost URL and paste it in the emulator, e.g. http://localhost:3979
  • Append /api/messages to the above URL, e.g. http://localhost:3979/api/messages.
  • You won't need to specify a Microsoft App ID and Microsoft App Password for localhost testing, so click on "Connect".

Related Article:

I have explained Bot Framework installation, deployment, and implementation in the article below.

Summary

In this article, you learned how to create an Intelligent Image Object Detection Bot using Microsoft Cognitive Computer Vision API. If you have any questions/feedback/ issues, please write in the comment box.

Introduction:

Bot Framework enables you to build bots that support different types of interactions with users. You can design conversations in your bot to be free. Your bot can also have more guided interactions where it provides the users with choices or actions. The conversation can use simple text strings or more complex rich cards that contain text, images, and action buttons. And, you can add natural language interactions, which let your users interact with your bots in a natural and expressive way.

The Bot Builder SDK for .NET is an easy-to-use framework for developing bots using Visual Studio in Windows but for Visual Studio for Mac, it is not available in the official release. I have modified the Bot Framework template to work on Visual Studio for Mac and started using all the Bot Framework features on my Mac machine.

In this article, I am showing how to create, build, and test a Bot application using a Mac machine.


Prerequisites

  • Download and install Visual Studio for Mac.
  • Clone and download the Bot Framework Project Template for Mac.
  • Download and install the Bot Framework Emulator for Mac.

Configure and Register the Project Template

Step 1

Clone and download the Bot Framework template for Mac from the following URL - https://goo.gl/9ivoov

Step 2

Open the *.csproj file or the Visual Studio solution.

Step 3

Select and right-click on “Project” from Visual Studio Mac > “Restore NuGet packages”.

Step 4

Right-click on the project, select Project Options, select XSP Web Server, and expand the Run option. Update the port number to 3978 as in the screen below.



Step 5

Build the solution. If the build completes successfully, the project template gets added to the "Custom Folders" section in Visual Studio Preferences.


Create a Bot Application

Let's start with creating a new bot application in Visual Studio for Mac. Open Visual Studio for Mac, create a new project with C#, and select the Bot Application template as below.



Provide the project Name, solution name, and location as below.



The bot application gets created with all the components and all required NuGet references installed.


Update the code

The default application adds a simple code snippet, and we don't need to change anything. If you want to test a custom message, you can change it as below.

You can find the MessageReceivedAsync method in the Dialogs/RootDialog.cs file. In this method, activity.Text contains the user's text input, so you can reply to a message based on the input text.

private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<object> result)
{
    var activity = await result as Activity;
    // Calculate something for us to return
    int length = (activity.Text ?? string.Empty).Length;
    // Return our reply to the user
    if (activity.Text.Contains("technology"))
    {
        await context.PostAsync("Refer to the C# Corner website for technology: http://www.c-sharpcorner.com/");
    }
    else if (activity.Text.Contains("morning"))
    {
        await context.PostAsync("Hello!! Good morning, have a nice day");
    }
    else if (activity.Text.Contains("night"))
    {
        await context.PostAsync("Good night and sweetest dreams with the Bot application");
    }
    else if (activity.Text.Contains("date"))
    {
        await context.PostAsync(DateTime.Now.ToString());
    }
    else
    {
        await context.PostAsync($"You sent {activity.Text} which was {length} characters");
    }
    context.Wait(MessageReceivedAsync);
}

Run Bot Application

The emulator is a desktop application that lets you test and debug your bot on localhost or remotely. Now, run the application and it will open in any browser.


Install Bot Emulator

If you have not installed the Bot Emulator on Mac, you need to download and install the emulator for testing the bot application. You can download the Bot Emulator from - https://goo.gl/kZkoJT


Follow the below steps to test your bot application on Mac.

  • Open Bot Emulator.
  • Click "New Bot Configuration".
  • Copy the above localhost URL and paste it into the emulator. For example - http://127.0.0.1:3978
  • You can append /api/messages to the above URL; e.g. - http://127.0.0.1:3978/api/messages.
  • You won't need to specify the MSA ID and MSA password for localhost testing. So, click on "Save and Connect".


You can send a message to the bot application. The bot will reply as per your guide/code.


Summary

In this article, you learned how to create a bot application using Visual Studio for Mac. If you have any question, feedback, or issues, please write in the comment box.
The Xamarin Developer Interview Questions and Answers Bot is ready to use in the Line app. The Xamarin FAQ Bot comes with 7000+ Xamarin QnA pairs. Now open your Line app > click on the three-dot menu > scan the following QR code to add the Xamarin QA bot as a friend.



The Microsoft Learning blog posted that Microsoft Azure and Data & AI certification changes are coming in March 2020.

Microsoft is currently finalizing the updates related to the Azure Administrator, Developer, Architect, and AI Engineer certifications. Microsoft will be publishing the updated exams in the next few months, but will leave the old exams in the market for 90 days after the new versions become available. If you have been preparing for the current version of an exam, you can still take it during this transition period if you want; however, those versions of the exam will retire at the end of that 90-day window.



Do you want to know the reasons for these changes? You can read more from the Microsoft Learning portal: Read more here.







Introduction:

Azure Content Moderator API is a cognitive service that checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable. When such material is found, the service applies appropriate labels (flags) to the content. Your mobile application or website can then handle the flagged content in order to comply with regulations or maintain the intended environment for users.

Microsoft published the online Content Moderator Review tool to test the text, image, and video functionality of Content Moderator without having to write any code. Before integrating it into an application, let us quickly review all the Content Moderator demos; you can read my previous article to understand the Cognitive Services Content Moderator.


Set up the Moderator Portal

Step 1: Navigate to the Content Moderator portal and register.
Step 2: Log in using your registered login, Microsoft account, or work login.
Step 3: Create a team and invite your team members to join the portal.


Try Content Moderator

Now you are ready to try the Content Moderator. In the main menu, select Try, choose an operation (text, image, or video), and upload the content; the application will return the following results from the relevant API.


Text Content Moderator

Navigate to the Content Moderator portal > click on Try > select “Text” and provide your text, up to 1024 characters at a time (currently only English is supported). Each score is between 0 and 1. The higher the score, the more strongly the model predicts that the category may be applicable. This feature relies on a statistical model rather than manually coded outcomes. We recommend testing with your own content to determine how each category aligns with your requirements.

  • Category1 refers to potential presence of language that may be considered sexually explicit or adult in certain situations.
  • Category2 refers to potential presence of language that may be considered sexually suggestive or mature in certain situations.
  • Category3 refers to potential presence of language that may be considered offensive in certain situations.
The personal data feature detects the potential presence of this information:
  • Email address
  • US Mailing address
  • IP address
  • US Phone number
  • UK Phone number
  • Social Security Number (SSN)
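Once your application receives these scores, a common pattern is to act on them with a threshold you calibrate yourself. The sketch below is illustrative only (the method name and the 0.5 threshold are my assumptions, not part of the API): it flags a piece of text when any category score reaches the threshold.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TextModerationScores
{
    // Flag the text if any category score reaches the threshold.
    // The 0.5 default is an assumption; tune it against your own content.
    public static bool ShouldFlag(IDictionary<string, double> categoryScores, double threshold = 0.5)
    {
        return categoryScores.Values.Any(score => score >= threshold);
    }

    public static void Main()
    {
        var scores = new Dictionary<string, double>
        {
            ["Category1"] = 0.02,  // sexually explicit
            ["Category2"] = 0.11,  // sexually suggestive
            ["Category3"] = 0.87   // offensive
        };
        Console.WriteLine(ShouldFlag(scores));   // Category3 crosses the threshold
    }
}
```

In a real moderation pipeline, you would likely use different thresholds per category, as each one aligns differently with your requirements.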

Image Content Moderator

Navigate to the Content Moderator portal > click on Try > select “Image” and upload the image. The service scans images for text content and extracts that text, and it detects faces. The Evaluate operation returns a confidence score between 0 and 1, along with Boolean values equal to true or false; these values predict whether the image contains potential adult or racy content.



Video Content Moderator

Navigate to the Content Moderator portal > click on Try > select “Video” and upload the video. Machine-assisted video classification is achieved with either image-trained models or video-trained models. Unlike image-trained video classifiers, Microsoft's adult and racy video classifier is trained with videos.

Summary: 

I hope you have understood what the Azure Content Moderator is and how the Content Moderator web portal works. The next step is to create the Azure API and implement it in an application using C#. Please leave your feedback/query using the comments box, and if you like this article, please share it with your friends.
The challenge begins 23 September 2019 and ends on 23 October 2019.

I have successfully completed the Xamarin Azure Function Challenge. The goal of the challenge is to create a serverless Azure Function and connect it to a Xamarin mobile app. You can refer to one of my previous articles, Building Serverless Mobile App With Azure Functions Using Xamarin.Forms.

Are you interested in trying it? If a Xamarin setup is already on your machine, the challenge takes only about 15 minutes; otherwise, it requires some prerequisite installation and setup.


Challenge Prize:

Ten (10) Grand Prizes: Each winner will receive Microsoft Surface Headphones

One Thousand (1,000) Prizes: Each winner will receive a 3-month Xbox Game Pass
Prerequisites:

You can use either Windows or macOS for development.

On Windows, download and install Visual Studio 2019 Community (free) with the Xamarin workload, using the Xamarin-optimized installer.

On macOS, download and install Visual Studio 2019 for Mac Community.

To deploy and run your Azure Functions, you first need an Azure account. You can sign up for a FREE account here.

Start the Azure Functions Challenge:

Task 1: Clone the Xamarin Azure Challenge project from the GitHub repository.
Task 2: On a Windows machine, refer here to publish the Azure Function; on a Mac machine, refer here.
Task 3: Configure the Azure Function in the Azure portal.
Task 4: Configure the Xamarin app.
Task 5: Run the Xamarin application on the iOS or Android platform.

Congratulations:

Provide your basic information, accept the terms, and click on Submit. You've successfully completed the Xamarin Azure Challenge; you will get an email very shortly. All the best!

As mobile developers, when we start a new project we always search and talk about application architecture. One of the most common choices in Xamarin.Forms is MVVM, due to the small amount of work needed to implement it; if we use ReactiveUI, we can additionally write applications in a reactive manner. It's time to look at how we can implement ReactiveUI in a Xamarin.Forms project.

Reactive Extensions (Rx) have been around for many years and are available in most development environments. In this post, we are specifically going to look at Rx in terms of .NET development in Xamarin.Forms.

Rx is just a library for composing asynchronous, event-based code with observables, and configuring it via LINQ. You can use LINQ to define the criteria for when you want to perform an action on an event. Rx can do more, but we will only look at the very basics in this post.

ReactiveUI allows you to combine the MVVM pattern with reactive programming using features such as WhenAnyValue, ReactiveCommand, ObservableAsPropertyHelper, Binding, and WhenActivated.

Create New Xamarin.Forms Application

In order to learn ReactiveUI, let's start by creating a new Xamarin.Forms project using Visual Studio 2019 or Visual Studio for Mac. When using Visual Studio 2019 on a Windows machine, you will need to pair a Mac machine to run and build the iOS platform.

Open Visual Studio 2019 >> Create a New Project, or select "Open Recent Application", and the available templates will appear as below. Select the Xamarin.Forms app template and click on “Next”.

ReactiveUI Nuget Package

To implement ReactiveUI in our applications we will need to install the library. 

Step 1: Right-click on the solution and click on the "Manage NuGet Packages for Solution" option.
Step 2: Search for “ReactiveUI.XamForms”.
Step 3: Install it into all of the projects for each platform.




Create ViewModel 

ReactiveUI's syntax for read-write properties notifies observers that a property has changed; otherwise, we would not be able to know when it was changed.

In cases where we don't need two-way binding between the View and the ViewModel, we can use one of many ReactiveUI helpers to notify observers of a changing read-only value in the ViewModel.

RaiseAndSetIfChanged fully implements a Setter for a read-write property on a ReactiveObject, using CallerMemberName to raise the notification and the ref to the backing field to set the property.
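Under the hood, this is the classic change-notification pattern. As a plain-C# illustration of the same idea (no ReactiveUI dependency; the class and method names below are mine, not ReactiveUI's), a setter can compare the new value against the backing field and raise a notification using CallerMemberName:

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

// A plain-C# sketch of the pattern RaiseAndSetIfChanged implements:
// set the backing field only when the value actually changed, then raise
// PropertyChanged with the caller's property name.
public class ObservableBase : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected T SetIfChanged<T>(ref T field, T value, [CallerMemberName] string propertyName = null)
    {
        if (!EqualityComparer<T>.Default.Equals(field, value))
        {
            field = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }
        return field;
    }
}

public class DemoViewModel : ObservableBase
{
    private string _userName;
    public string UserName
    {
        get => _userName;
        set => SetIfChanged(ref _userName, value);
    }
}

public class ObservableDemo
{
    public static void Main()
    {
        var vm = new DemoViewModel();
        vm.PropertyChanged += (s, e) => Console.WriteLine($"Changed: {e.PropertyName}");
        vm.UserName = "demo";   // raises PropertyChanged("UserName")
        vm.UserName = "demo";   // same value: no notification
    }
}
```

RaiseAndSetIfChanged does the equivalent work on a ReactiveObject, so the ViewModel below never has to repeat this boilerplate.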

ReactiveCommand is a Reactive Extensions-based, asynchronous-aware implementation of the ICommand interface. ICommand is often used in the MVVM design pattern to allow the View to trigger business logic defined in the ViewModel.
 
Let us create the following properties and command in the ViewModel.





using System;
using System.Threading.Tasks;
using ReactiveUI;
using System.Reactive;
using System.Text.RegularExpressions;
using System.Collections.Generic;
using System.Reactive.Linq;

namespace ReactiveUIXamarin.ViewModel
{
    public class MainPageViewModel : ReactiveObject
    {
        private string _result;
        public string Result
        {
            get => _result;
            set => this.RaiseAndSetIfChanged(ref _result, value);
        }

        private string _username;
        public string UserName
        {
            get => _username;
            set => this.RaiseAndSetIfChanged(ref _username, value);
        }

        private string _password;
        public string Password
        {
            get => _password;
            set => this.RaiseAndSetIfChanged(ref _password, value);
        }

        private string _address;
        public string Address
        {
            get => _address;
            set => this.RaiseAndSetIfChanged(ref _address, value);
        }

        private string _phone;
        public string Phone
        {
            get => _phone;
            set => this.RaiseAndSetIfChanged(ref _phone, value);
        }

        public ReactiveCommand<Unit, Unit> RegisterCommand { get; }

        public MainPageViewModel()
        {
            RegisterCommand = ReactiveCommand
                .CreateFromObservable(ExecuteRegisterCommand);
        }

        private IObservable<Unit> ExecuteRegisterCommand()
        {
            Result = "Hello" + UserName + " Registration Success";
            return Observable.Return(Unit.Default);
        }
    }
}

Create UI View:



ReactiveUI allows you to create views using two different approaches. The recommended approach is using type-safe ReactiveUI bindings, which can save you from memory leaks and runtime errors. The second approach is using XAML markup bindings.
The following sample UI was created using the recommended approach, type-safe ReactiveUI bindings.


<rxui:ReactiveContentPage
    x:Class="ReactiveUIXamarin.MainPage"
    x:TypeArguments="vm:MainPageViewModel"
    xmlns:vm="clr-namespace:ReactiveUIXamarin.ViewModel;assembly=ReactiveUIXamarin"
    xmlns:rxui="clr-namespace:ReactiveUI.XamForms;assembly=ReactiveUI.XamForms"
    xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
    xmlns:ios="clr-namespace:Xamarin.Forms.PlatformConfiguration.iOSSpecific;assembly=Xamarin.Forms.Core"
    xmlns="http://xamarin.com/schemas/2014/forms"
    ios:Page.UseSafeArea="true">
    <StackLayout>
        <Entry x:Name="Username" Placeholder="Username" />
        <Entry x:Name="Password" Placeholder="Password" />
        <Entry x:Name="Address" Placeholder="Address" />
        <Entry x:Name="Phone" Placeholder="Phone Number" />
        <Button x:Name="Register" Text="Register" TextColor="White" BackgroundColor="Gray" />
        <Label x:Name="Result" />
    </StackLayout>
</rxui:ReactiveContentPage>






ContentPage should inherit from ReactiveContentPage<TViewModel>, and we are going to use ReactiveUI binding to bind our ViewModel to our View.

Reactive binding is a cross-platform way of consistently binding properties on your ViewModel to controls on your View.

The ReactiveUI binding has a few advantages over XAML-based binding. The first advantage is that property name changes will generate a compile error rather than a runtime error.

One important thing to follow while binding: always dispose of bindings via WhenActivated, or else the bindings leak memory.





using ReactiveUI;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using Xamarin.Forms;
using ReactiveUIXamarin.ViewModel;
using ReactiveUI.XamForms;
using System.Reactive.Disposables;

namespace ReactiveUIXamarin
{
    public partial class MainPage : ReactiveContentPage<MainPageViewModel>
    {
        public MainPage()
        {
            InitializeComponent();
            ViewModel = new MainPageViewModel();

            // Setup the bindings.
            // Note: We have to use WhenActivated here, since we need to dispose the
            // bindings on XAML-based platforms, or else the bindings leak memory.
            this.WhenActivated(disposable =>
            {
                this.Bind(ViewModel, x => x.UserName, x => x.Username.Text)
                    .DisposeWith(disposable);
                this.Bind(ViewModel, x => x.Password, x => x.Password.Text)
                    .DisposeWith(disposable);
                this.Bind(ViewModel, x => x.Address, x => x.Address.Text)
                    .DisposeWith(disposable);
                this.Bind(ViewModel, x => x.Phone, x => x.Phone.Text)
                    .DisposeWith(disposable);
                this.BindCommand(ViewModel, x => x.RegisterCommand, x => x.Register)
                    .DisposeWith(disposable);
                this.Bind(ViewModel, x => x.Result, x => x.Result.Text)
                    .DisposeWith(disposable);
            });
        }
    }
}

Output:


You can download the source code from the GitHub repository. When you run the application on an iPhone device, you will get the following output; click on Register, and it will show the confirmation message as in the screen below.


Summary



This article has demonstrated a little of what you can do when you combine the power of ReactiveUI with Xamarin.Forms. I hope this article helps you get started with this awesome framework.
The Storyboards feature, first introduced in iOS 5, saves time building user interfaces for iOS mobile apps. Storyboards allow you to prototype and design multiple view controllers' views within one file. Before Storyboards, you had to use XIB files, and you could only use one XIB file per view (UITableViewCell, UITableView, or other supported UIView types).

A Storyboard is the visual representation of all the screens in an application. It contains a sequence of scenes, with each scene representing a View Controller and its Views. These views may contain objects and controls that will allow your user to interact with your application.

A storyboard's collection of views and controls (or subviews) is known as a content view hierarchy. Scenes are connected by segue objects, which represent a transition between view controllers. This is normally achieved by creating a segue between an object in the initial view and the connecting view; the relationships are shown on the design surface.

The following image shows what a storyboard looks like; it's similar to the storyboard you'll build by the end of this article.

Getting Started with iOS Storyboards in Xamarin.IOS

Create a New Xamarin.iOS Application

In order to learn about storyboards, let's start by creating a new Xamarin.iOS project using Visual Studio 2019 or Visual Studio for Mac. When using Visual Studio 2019 on a Windows machine, you will need to pair the Mac machine.

Open Visual Studio 2019 >> Create New Project, or select "Open Recent Application", and the available templates will appear as below. Select the Xamarin.iOS app template and click on “Next”.


Create an Empty iPhone/iPad Single View Storyboard Application


The content of a storyboard is stored as an XML file. Storyboard files are compiled into binary files known as nibs. At runtime, these nibs are initialized and instantiated to create new views. After selecting the Single View application, the solution template is generated as below.



Main Storyboard: Open Main.storyboard, where you can find a single view controller. Add one more view controller and update the view controllers as shown below.



View Controller Segue: Let us now connect the two view controllers. A segue is used in iOS development to represent a transition between scenes. To create a segue, hold down the Ctrl key and click-drag from one scene to another. As we drag the mouse, a blue connector appears; select Show/Push as in the image below.




Change View Properties: Add controls to the view controller, select their properties, and change the styles as per your needs.



Output: When we run the application on an iPhone device, we'll get the following output; click on the button, and it will navigate to the new screen.



Summary: 


This article has demonstrated how to create iOS Storyboards using Xamarin.iOS. I hope this article helps you. Please leave your feedback/query using the comments box, and if you like this article, please share it with your friends.
