Drag and drop between multiple Trees in jQuery using jstree

This was a fun project that came about because of a lack of foresight in the design process. I had argued with the Business Analyst that this project would take a substantial amount of time to complete, and that it might be easier to use a simpler control: two list controls, or one list containing the initial groups and one or more containing the child objects, which in my case were substations and generating units. But the B.A. pushed back and insisted on having two identical tree controls with multiple levels of child objects (each shown with a different icon, of course). I did research to find out exactly what kind of control I could simply reuse, as I thought there had to be an existing implementation of drag and drop between two tree controls! But, of course, there wasn’t. It’s trivial to implement drag and drop between two tree controls, but only with the exact same objects in both. If you wanted more than one ‘level’, no such control existed anywhere, at least at that time (a few years ago now). We were also restricted to free controls, such as jQuery plugins, and commercial products like Telerik or Infragistics didn’t implement what I wanted either. So I decided on jstree (www.jstree.com), which at least looked great. There was no support, and not a lot of developers were using it, but it did have events that detected drag and drop between different instances of jstree, which was at least a good starting point. By the way, the advice I got from other ‘developers’ on our team was that this little project was quite impossible, and that I should just give up and implement it in a trivial manner, using lists. But I was determined, so I forged ahead and got to work with jstree.

My solution used recursion, which seemed like the easiest way to accomplish what I needed. If you actually think about the problem before starting work on it (sadly, not a lot of developers think about design and logic before coding these days), you realize you need logic to prevent dragging a node onto a different ‘level’ than the one it belongs to. Then, of course, you have to delete all the dragged objects from the source tree, and from the database in certain cases, so it’s quite involved to implement all of this correctly, and to have it done within the few weeks I had estimated! The final solution was also much more involved than this demo, as it included more objects that ‘wrap’ around the basic solution, including ‘maps’, which let the user build custom maps with geographic locations as a starting point for creating a destination tree ‘map’, plus other engineering terminology I won’t go into here. A project like this is a good way to determine how a developer works. Any real developer would push back on a Business Analyst’s flawed logic and incomplete design, especially in this case, and would at the very least understand how it affects the existing project and whether it makes sense to incorporate something like this into the existing system. So a project like this would be a good way to separate poor developers from good ones. A good developer can actually produce a great-looking, well-functioning UI for the user; if they can’t, you should reconsider what their contribution to the project actually means. It’s very easy to be good at one portion of the ‘stack’, such as web services or database development, but it’s many, many times harder to create an entire project from scratch and have an incredibly complex UI system working perfectly.
If you think about it, every layer, the database project, the web services project, and of course the UI project, has to work perfectly in order to be useful to the user. So if you are hiring a new developer, this could be the perfect project to give them in their first few weeks, as the few hours you spent interviewing them don’t really show whether they are a competent developer on a real-world problem.

Now back to the jstree solution: To get this running, the basic HTML for the placement of the controls is as follows:

You can just place a div wherever you want a tree control to appear on your form. So, for side-by-side controls, wrap two divs within another div. A basic implementation of jstree is as follows:
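The original markup and initialization aren’t reproduced here, but a minimal sketch looks something like this (the div ids and service path follow the article’s naming; treat the exact options as assumptions):

```javascript
// Minimal sketch of a jstree setup: each tree renders into its own div
// and loads nodes via an ajax call. The dnd plugin enables drag and drop
// between the two instances.
const treeConfig = {
  core: {
    data: {
      url: "/Services/CutPlaneManagement.asmx/GetAllElectricalGroupsSubstations",
      dataType: "json"
    },
    check_callback: true   // allow move/create/delete operations on the tree
  },
  plugins: ["dnd"]         // drag-and-drop support, including across instances
};

// In the browser you would wire this up with:
//   $("#divGroupingsTree").jstree(treeConfig);
//   $("#divAddedGroupsTree").jstree(treeConfig);
```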

This creates the actual tree in the divGroupingsTree div, then loads all the nodes that you want to display from the url (“/Services/CutPlaneManagement.asmx/GetAllElectricalGroupsSubstations?…). The next step is to bind whatever events you need for each tree. In my case, it’s the move_node.jstree event, which fires whenever a user moves a node to a new location, either on a different tree or the same tree. This code fires when the tree detects a node that has been moved:

This also creates a list, and ensures that all items are moved to the appropriate parent in the destination tree. The next portion of the code to examine is the call to ‘updatesubstations’, which actually updates the child nodes in the database itself. This code is taken from the ‘added groups’ tree creation code:

The web service call basically updates the child list per map, as we are using a map id to group each ‘mapping’ of child objects:
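The web service itself isn’t shown here, but the core idea, replacing the child list stored for a given map id, can be sketched in plain JavaScript (all names here are hypothetical; the real implementation wrote to the database):

```javascript
// Sketch: each map id groups one 'mapping' of child objects, and an update
// simply replaces the child list stored for that map.
function updateSubstations(mappings, mapId, substationList) {
  // Return a new mappings object with the given map's children replaced.
  return { ...mappings, [mapId]: [...substationList] };
}

const before = { 1: ["East Van 1"] };
const after = updateSubstations(before, 1, ["East Van 1", "Edmonds 1"]);
```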


The next part of the code recreates the hierarchy and returns the tree object to jstree:

Jstree then uses this hierarchy to automatically recreate the parent child hierarchy as reflected by the latest actions of the user:
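As a rough illustration of what rebuilding that hierarchy involves, here is a plain-JavaScript sketch that turns flat id/parentId records (hypothetical field names) into the nested parent/child structure a tree control consumes:

```javascript
// Build a nested tree from flat records that reference their parent by id.
function buildHierarchy(records) {
  const byId = new Map(records.map(r => [r.id, { ...r, children: [] }]));
  const roots = [];
  for (const node of byId.values()) {
    if (node.parentId != null && byId.has(node.parentId)) {
      byId.get(node.parentId).children.push(node);   // attach under its parent
    } else {
      roots.push(node);                              // no parent: a root node
    }
  }
  return roots;
}

const flat = [
  { id: 1, parentId: null, text: "B.C. - British Columbia" },
  { id: 2, parentId: 1, text: "Bby - Burnaby" },
  { id: 3, parentId: 2, text: "Edm - Edmonds" }
];
const tree = buildHierarchy(flat);
```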

Next is the JavaScript code that recursively moves the nodes into the destination, and then deletes any duplicate nodes / subnodes from the source tree:

The actual function that recursively moves nodes is moveNodes(), which is rather long and complicated:

This function tests the number of parent objects to move; if it’s greater than 1, it will eventually call itself recursively to move the multiple parent/child levels below the node that the user actually dragged. The last part I wanted to show you is the createSubstationList() recursive call, which builds the list of substations (child objects) that are in the tree after the move completes. This list is then used by the ‘UpdateSubstations()’ server-side call to actually update the list of substations in the database itself:
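To make the recursion concrete, here is a simplified sketch of the merge idea, using plain objects in place of the jstree DOM nodes the real moveNodes() walks (this is an illustration of the technique, not the original function):

```javascript
// Merge a dragged node, with all of its descendants, into a destination
// tree, creating any parent folders that do not yet exist there.
function mergeNode(destChildren, node) {
  let match = destChildren.find(c => c.text === node.text);
  if (!match) {
    match = { text: node.text, children: [] };   // create the missing folder
    destChildren.push(match);
  }
  // Recurse so every level below the dragged node is moved as well.
  for (const child of node.children) {
    mergeNode(match.children, child);
  }
  return destChildren;
}

const destination = [{ text: "Bby - Burnaby", children: [] }];
mergeNode(destination, {
  text: "Bby - Burnaby",
  children: [{ text: "Edm - Edmonds", children: [] }]
});
```

After the merge, the source-side duplicates would then be deleted, which is the second half of what the article’s function does.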

This is a fairly straightforward recursive function that searches for any children within the parent node, recursively traverses the HTML tree to retrieve all substations (our child object name in this case), and adds them to the substlist parameter.
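A stripped-down sketch of that traversal, with plain objects again and an assumed type field marking substations:

```javascript
// Recursively walk the tree after a move and collect every substation
// (leaf child object) into a flat list for the server-side update call.
function createSubstationList(node, substList) {
  for (const child of node.children || []) {
    if (child.type === "substation") {
      substList.push(child.text);
    }
    createSubstationList(child, substList);   // keep traversing deeper levels
  }
  return substList;
}

const root = {
  text: "B.C.",
  children: [{
    text: "East Van - East Vancouver", type: "group",
    children: [{ text: "East Van 1", type: "substation", children: [] }]
  }]
};
const list = createSubstationList(root, []);
```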

Here is the actual demo project running. In this case, I demonstrate before and after screenshots of moving nodes from the available ‘map’ to the destination, or ‘added’, map. The first test is to move a parent folder (Group), along with all of its child node(s), to the main root folder, and see if it is smart enough to create all parent groups in the destination, along with the newly moved groups and ‘substation’ child node(s), of course! So I will drag the ‘Edm – Edmonds’ folder from the Available to the Added tree, drop it inside the ‘B.C. – British Columbia’ folder, and see what happens:


Dragged to Destination tree:

I simply clicked on the ‘Edm – Edmonds’ folder, dragged it to just under the B.C. folder, and dropped it there. You can drop it anywhere that makes sense, such as under the ‘GVRD – Lower Mainland’ folder as well.


Yay, it worked! You can see that it was smart enough to create both the ‘Bby – Burnaby’ folder and the ‘Edm – Edmonds’ folder, and to place the newly moved child node in the ‘Edm – Edmonds’ folder, which is exactly what we wanted (or at least what our engineers wanted in this case). It should work in a similar fashion if you drag a child node, let’s say ‘East Van 1 – East Vancouver Substation 1’, from right to left, and drop it in the B.C. root folder:

Initial State:

While Dragging from right to left:

End Result:

So in this case, the ‘East Van – East Vancouver’ folder existed in both trees, with one unique child node in each. I dragged from right to left, and it successfully placed the ‘East Van 1 – East Vancouver Substation 1’ child node in the destination folder, then deleted the source folder, as there was no reason for it to exist any longer. Of course, you can also drag whatever folder or node you want back and forth, and it should create and delete parent folders accordingly, in a way that makes sense.

In my most recent articles, I demonstrated the use of Telerik controls, and how to get them to work when you want custom functionality, versus just using them ‘out of the box’. There is a trade-off with either approach. If you use a commercial product or other prebuilt control that does most of what you want, your development time is usually substantially reduced, but not always: sometimes you need to spend quite a bit of time learning the ins and outs of the control you’ve chosen. This article demonstrates an approach using a basic tree control that has none of the functionality needed, except of course for the tree itself. Using a commercial product would most likely have saved some time in this case, but it was an enjoyable experience, and I wanted to demonstrate what can be accomplished this way. It was also an exercise in jQuery and JavaScript, as it was one of my first attempts at writing JavaScript and consuming a jQuery plugin. This project can easily be used as a starting point for your own project that maps objects in your organization graphically in the form of a tree, and it includes some nice functionality to visually map out relationships for the user. In our case, we used actual geographical locations that a user familiar with the region can relate to. This sample was also taken from an extremely complex project that had support for multiple geographical source and destination maps. You may be wondering why you can’t add nodes by right-clicking a context menu. I did create that portion of the project too, but in this case the demo takes a pre-existing map of a province (in our case) and creates a ‘cut’ of the map for use by engineers and technologists in the organization.

I hope you have found this project useful, and can use it in one of your future projects, or can at least appreciate the complexity of the problem and the ultimate solution. I have also included the complete source code, which you can download and reuse if you like.

Source code

Implementing ‘Batch Mode’ using Kendo Upload Control for ASP.NET Core

I initially used the upload control as a basic upload-only control on a form, and that’s exactly what it is meant for. However, I wanted it to work in a kind of ‘Batch Mode’. What I mean by that is to only hold the files in the browser itself, and not automatically post them back to the server when the user uploads a file. And I didn’t want a file to be deleted instantly when the user removes it from the uploaded files list either. However, this turned out to be more difficult to implement than I thought. I had a ‘catch-22’ situation: in async mode (the mode that calls server-side functions upon file upload or file removal), I could get an initial files list to appear, so upon opening the form the user could see which files they uploaded previously, which is great. But that’s not exactly what I wanted, as in async mode the files are added or removed on the server the moment the user uploads or removes them. So I had to write quite a bit of custom JavaScript, and basically trick the upload control into believing it was in async mode without uploading the files instantly, which I managed to get working with the following code:
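The helper markup itself isn’t reproduced here; for reference, the equivalent jQuery-widget configuration would look roughly like this (URLs and the initial file entry are placeholders):

```javascript
// 'Batch mode' trick: stay in async mode so the initial files list renders,
// but stop the control from posting files to the server immediately.
const uploadOptions = {
  async: {
    saveUrl: "/Home/SaveAttachment",     // the 'dummy' async endpoints
    removeUrl: "/Home/RemoveAttachment",
    autoUpload: false                    // the key setting: no instant upload
  },
  files: [                               // previously uploaded files to show
    { name: "photo.png", extension: ".png", size: 12345 }
  ]
};

// In the browser: $("#files").kendoUpload(uploadOptions);
```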

Notice the .AutoUpload(false) statement. It stops the Upload control from firing the SaveAttachment() and RemoveAttachment() server-side methods. But this creates another problem: you now have two big autogenerated buttons that allow the user to upload or remove the current file at a later time. I didn’t want this, as I wanted to submit the entire form along with the IFormFile object, which contains the file the user uploaded. So I wrote some custom CSS to get rid of those ugly buttons!

That got rid of the buttons! But there is still a fair amount of JavaScript to write, as I wanted the previously uploaded images to appear as thumbnails when the form loads initially. Without this extra code (which I’m about to show you), the initial files list still populates, but inconveniently without thumbnails. So, essentially, the idea is to iterate, in JavaScript, through all of the files added to the files property of the control itself, retrieve each image, display it as a thumbnail, and replace the existing default view in the files list. loadImages() is called on document.ready:
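The full loadImages() isn’t shown here, but the heart of it, building an `<img>` tag from a file’s base-64 content, can be sketched as (class name and sizes are illustrative):

```javascript
// Build an <img> tag from a file's base-64 content so it can be injected
// before the default Kendo <span> elements in the files list.
function buildThumbnail(base64Content, contentType) {
  return '<img class="file-thumb" src="data:' + contentType +
         ';base64,' + base64Content + '" width="40" height="40" />';
}

const tag = buildThumbnail("iVBORw0KGgo=", "image/png");
// In the real page you would prepend this markup to each .k-file entry
// rendered by the upload control.
```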

Your view model would contain a base 64 string version of the actual file:

And the file can be any form of byte[] or stream; you simply convert it in your view model so it can easily be displayed in the files list. Once you have your base-64 version of the file, simply add an <img> tag with your actual file before the default Telerik <span> tags, and now you have thumbnails:
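As a minimal sketch of that conversion (shown in JavaScript rather than the view model’s C#, to keep the examples in one language):

```javascript
// The view model idea in miniature: the stored file bytes are converted to
// a base-64 string so the view can embed them directly in an <img> tag.
function toBase64(fileBytes) {
  return Buffer.from(fileBytes).toString("base64");
}

const bytes = [72, 101, 108, 108, 111];   // the bytes of "Hello"
const encoded = toBase64(bytes);          // "SGVsbG8="
```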

Now, the upload control renders correctly, and you have a nice image preview, which I think is a minimum for this scenario. Telerik doesn’t have an image preview option, so unfortunately that’s the only way to get it to work.

Now, there is still the issue of handling new file uploads and rendering their image previews in the file list properly. That is essentially the same as the JavaScript code above, except that it fires from the onSelect event of the upload control. I also added a Boolean flag to handle deletion of files, which is passed back with the view model on form submit. That function fires from the onRemove event:

This will then conveniently be passed to the server side function on form submit, and you can simply create custom logic on the server to handle all 3 cases:

  1. The user did nothing, so the File property is null, so you don’t have to do anything.
  2. The user added a file, so the File property contains the new file that was uploaded.
  3. The user removed a file by clicking the “x” beside the thumbnail so you check the PerformDelete flag, and delete the file if it is true:
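The three cases above reduce to a small decision function, sketched here with names mirroring the article’s PerformDelete flag (the real logic lives in the server-side action):

```javascript
// Decide what the server should do with the submitted file state.
function resolveFileAction(file, performDelete) {
  if (performDelete) return "delete";  // user clicked the 'x' on a thumbnail
  if (file != null) return "save";     // user added a new file
  return "none";                       // user did nothing
}
```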


Also, make sure to include your ‘dummy’ async methods; otherwise the async functionality simply won’t work, and you will be left wondering why:

Now, our Batch mode upload control is complete. It performs all the functions I wanted it to, including:

  1. Initial Files list WITH Thumbnail Image Preview
  2. Auto population of View model with users uploaded file
  3. Batch mode operation, with file addition and deletion only occurring on form submit
  4. New file upload image preview generated as users add and remove files automatically

Although it took a bit more effort than just dropping the Kendo Upload control on a form, I have exactly what I was looking for. I’m sure this will be a standard requirement for those working with images, versus the stripped-down functionality of the basic upload-only control. There are also many demos on the Telerik site: https://demos.telerik.com/aspnet-core/upload/index, but they all miss the mark in terms of batch-mode functionality. I’ve also included the source code on GitHub so you can reuse all of it in your own projects!

Source code

Implementing an AutoComplete Control on ASP.NET Core using Telerik

I was looking for an “autocomplete” solution that would let the user type any value for a username search, without the web application bringing back so much data that it slows the app down. The solution I initially worked with was the Telerik AutoComplete for ASP.NET Core. It was fairly straightforward to get client or server filtering working, but the big disadvantage is that every single item from the search results is returned, potentially sending a huge amount of data at once, which is never good. You could also code the paging mechanism directly into your controller, so that only a page is returned at a time, but there is an easier way: the virtualization option. However, according to the demo on Telerik’s website, there are many other functions involved besides the main controller action that returns the data for a particular search. The sample code shows several functions that are not written out, and that you would potentially have to write:

The value mapper controller action is missing from the demo, and it is not obvious what you need to write. It also may not work on ASP.NET Core, as there is no obvious way to send the request verification token back to the controller (if you use a POST request). But according to the documentation for virtualization, the implementation of the valueMapper function is now optional, and is only required when the AutoComplete widget contains an initial value or when the value method is used. So we can simply create our HTML markup as follows:
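The markup isn’t reproduced here; expressed as the equivalent widget configuration, the intent is roughly the following (the URL, field names, and sizes are assumptions):

```javascript
// Virtualized autocomplete: server paging + filtering, no valueMapper
// (optional when the widget has no initial value).
const autoCompleteOptions = {
  dataTextField: "UserName",
  virtual: { itemHeight: 26 },          // enables virtualization
  height: 260,
  dataSource: {
    transport: { read: { url: "/Users/GetUsers", dataType: "json" } },
    schema: { data: "Data", total: "Total" },
    pageSize: 80,
    serverPaging: true,                 // controller returns one page at a time
    serverFiltering: true
  }
};

// In the browser: $("#users").kendoAutoComplete(autoCompleteOptions);
```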

The controller code is basically the same as the ASP.NET Core demo code, so no additional changes have to be made to accommodate virtualization; that is handled by the Telerik ajax code. It passes the correct parameters into your controller code so that only the page of data (or less) that it needs at any given point in time is retrieved:
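Reduced to its essence, the paging the widget requests from the controller looks like this sketch (plain JavaScript standing in for the controller action; the { Data, Total } shape matches what a Kendo DataSource expects):

```javascript
// Filter the full set, then return only the requested page plus the total
// match count, so the widget can size its virtual scrollbar.
function getUsersPage(allUsers, filter, page, pageSize) {
  const matches = allUsers.filter(u =>
    u.toLowerCase().startsWith(filter.toLowerCase()));
  const start = (page - 1) * pageSize;
  return { Data: matches.slice(start, start + pageSize), Total: matches.length };
}

const users = ["alice", "albert", "bob", "alan"];
const pageOne = getUsersPage(users, "al", 1, 2);
```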

Now you have complete markup and controller code, and can begin testing the behaviour and performance of the autocomplete functionality. It is interesting to compare it with only the server filtering option enabled, where each call to the autocomplete controller action returns the entire dataset (unless, as mentioned previously, you implement custom paging on the server side). With a potentially large user database, that won’t be a practical solution.

This is the same markup, but with only server filtering enabled and no virtualization. As you can see, the entire dataset is returned on each call:

Now, let’s see the big difference with virtualization enabled for autocomplete:

The first few requests are the controller returning a particular page (or less than a page), per request. So, you can see a substantial reduction in the amount of data returned, which will increase the autocomplete responsiveness, and in turn the user experience.

So, as you can see, the new simplified virtualization feature of Telerik’s AutoComplete is fairly simple to implement (if you have the patience to take their incomplete demo code and also read most of the documentation). You definitely need a good understanding of how the functionality is implemented, so you can fine-tune it if necessary. This is an ideal solution for applications with very large datasets that need to be searched on demand while still providing a responsive user interface.



Source code is HERE



Debugging ASP.NET Core applications within IIS

I have been working with ASP.NET Core for a while now and always missed the direct IIS support in Visual Studio. Having to remember to spin up the project to start IIS Express is a bit of a nuisance. When developing software, we want the actual debugging and run processes to be as automated as possible, and with IIS Express, they simply aren’t.

It is much quicker to simply launch a browser and debug JavaScript instantly, without the extra step of making sure that the IIS Express site is actually running. And there is no need to start and stop your website, making development that much quicker.

Essentially, the goal is to have your web server running 24/7, without having to think twice about it. So, the first step is to actually enable IIS on your development machine:


Enable IIS

  1. Navigate to Control Panel > Programs > Programs and Features > Turn Windows features on or off (left side of the screen).
  2. Select the Internet Information Services check box:




The next step is to configure IIS and ensure you have an SSL certificate set up to run your site securely in the browser. If you’ve already installed IIS previously, simply add an HTTPS binding to allow https on your default web site.


Configure IIS

The Host name for our new website is set to “localhost” (the launch profile will also use “localhost” within Visual Studio). The port is set to “443” (HTTPS). The IIS Express Development Certificate is assigned to the website, but any valid certificate works:




The first two steps are straightforward, and are the same whether you are using .NET Framework or .NET Core in your applications. I have managed to debug with IIS using Visual Studio 2017, so I highly recommend that you install Visual Studio 2017, if you haven’t already.


Next, we have to enable development time IIS support in Visual Studio:


Enable Development-Time IIS support in Visual Studio 2017

  1. Launch the Visual Studio installer.
  2. Select the Development time IIS support component. The component is listed as optional in the Summary panel for the ASP.NET and web development workload. The component installs the ASP.NET Core Module, which is a native IIS module required to run ASP.NET Core apps with IIS:


Now we can finally create a new ASP.NET Core application in VS2017. Well, not quite yet! I followed several articles, both from Microsoft and from other developers, but they were all missing the key component: ASP.NET Core 2.2. Don’t use 2.1 or any other version; I couldn’t get my application debugging within IIS without 2.2. That’s the main reason I write articles like this: instead of sending you through other articles that don’t cut it, I learn what I can from them and create a better article that actually gets developers where they need to be, without leaving important information out.

You can download .NET core 2.2 here: https://dotnet.microsoft.com/download/dotnet-core/2.2


Now, that you’ve got .NET core SDK 2.2 installed, we can finally create a new project:


Create New ASP.NET Core 2.2 project

Make sure to select the check box to Configure for HTTPS when creating a new project:




Next, we need to configure the debug tab within our new project. This involves setting up a launch profile to launch IIS correctly:


IIS launch profile

Create a new launch profile to add development-time IIS support:

  1. For Profile, select the New button. Name the profile “IIS” in the popup window. Select OK to create the profile.
  2. For the Launch setting, select IIS from the list.
  3. Select the check box for Launch browser and provide the endpoint URL. Use the HTTPS protocol. This example uses https://localhost/TestIISWithCore.
  4. In the Environment variables section, select the Add button. Provide an environment variable with a name of ASPNETCORE_ENVIRONMENT and a value of Development.
  5. In the Web Server Settings area, set the App URL. Set it to the same as the URL you entered in Step 3.
  6. Save the profile:
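The steps above correspond to a launchSettings.json profile roughly like this (the URL comes from the example; treat the rest as a sketch of what Visual Studio generates):

```json
{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iis": {
      "applicationUrl": "https://localhost/TestIISWithCore"
    }
  },
  "profiles": {
    "IIS": {
      "commandName": "IIS",
      "launchBrowser": true,
      "launchUrl": "https://localhost/TestIISWithCore",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}
```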



You should now be able to debug your application with IIS. Make sure to set your build configuration to Debug, and the profile to IIS. Then click the run button to start your application:




There you have it. You can now officially debug your ASP.NET Core apps within IIS. Of course, this is still a matter of personal preference; I have always preferred debugging my apps within IIS instead of IIS Express.

Creating a Side Menu for ASP.NET Core using a View Component

While developing our new web application, we wanted to add a menu component that is dynamically generated based on the current route and parameters.

I initially looked into the concept of partials in ASP.NET Core, and while these are great for reusing static markup, they’re not so great for building dynamic, data-driven content such as a dynamic menu.

Where your requirement is to reuse dynamic and/or data-driven content, the correct design approach is to use a ViewComponent. From the Microsoft documentation:

After looking at partial views and view components, I found that ViewComponents don’t have to depend on data already existing. For example, you can simply make an asynchronous call to a server-side method like so:

@await Component.InvokeAsync("MenuItems", 1234)

According to the Microsoft documentation, view components are similar to partial views, but they’re much more powerful. View components don’t necessarily use model binding, and only depend on the data provided when called. A view component:

– Renders a chunk rather than a whole response.
– Includes the same separation-of-concerns and testability benefits found between a controller and view.
– Can have parameters and business logic.
– Is typically invoked from a layout page.

View components are intended anywhere you have reusable rendering logic that’s too complex for a partial view, such as:

– Dynamic navigation menus

– Tag cloud (where it queries the database)

– Login panel

So now, our menu tree structure is handled by a ViewComponent. All the business logic for building a user-specific menu is contained within the ViewComponent, which returns the menu tree structure. This is then displayed by the Razor Page that calls it. When you call a view component method, you don’t have to pass parameters, and you don’t have to pass a view model. With partials, however, you need to pass data (a view model) at the time you want to render the partial view, so your data needs to be ready beforehand, making the partial tightly coupled to your existing view(s).

There are also many other benefits:

– Encapsulates the underlying business logic for a Razor Page in a separate component

– Allows the business logic to be unit-tested

– Allows the UI component to be reused across different forms, essentially acting like an independent view

– Leads to cleaner code with separation of concerns

Here is the View Component itself:

The ViewComponent calls other methods to actually generate the menu. In this case, it’s just a list of parent -> child mapped menu objects, included here for your reference:
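The helper code isn’t reproduced here, but the essential transformation, flat menu records into a parent -> child structure, can be sketched as follows (JavaScript for brevity; the field names are assumptions, not the article’s actual classes):

```javascript
// Turn flat menu records that reference their parent by id into the
// nested, ordered structure the view recurses over.
function buildMenu(records, parentId = null) {
  return records
    .filter(r => r.parentId === parentId)
    .sort((a, b) => a.order - b.order)           // preserve menu ordering
    .map(r => ({ ...r, children: buildMenu(records, r.id) }));
}

const menu = buildMenu([
  { id: 1, parentId: null, order: 1, text: "Dashboard" },
  { id: 2, parentId: 1, order: 1, text: "Create New" }
]);
```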

Once you have your ‘hierarchical’ list of menu items, just return it to your view as shown inside the InvokeAsync() method. The view then recursively displays the menu items according to the parent/child relationships within your menu records:

This is returned wrapped inside an instance of IViewComponentResult, which is one of the supported result types returned from a ViewComponent.

This is what the call to invoke the ViewComponent from the layout page looks like:

@await Component.InvokeAsync("Menu", new { cbaId = ViewBag.Id })

I’ve used the ViewBag in this case to pass an ID to the ViewComponent, so there is some context of what to display. You can also see within the ViewComponent that I retrieve the current Route, and use it, along with the Id above to determine what menu items to load.

The end result looks like this:

You can see the ordering is correct, and along with using Font Awesome icons, a nice collapsible menu is created. Clicking on the Create New button runs custom Javascript to most likely create something:

The rest of the items are route Url’s, all created from the MenuHelper class. I used the basic ASP.NET core application as a starting point and ‘wrapped’ my menu components around it.

I created the entire dynamic menuing system very quickly, and it’s essentially independent of other views and partial views within your system. It can be reused simply by pasting all of my code into your application. This is a real-world example that solves very real problems in modern application development. Having independent, easily testable modules like this ensures your applications are far less likely to cause issues in a production environment.

Source Code


Integrating SharePoint with OutSystems

By Chris Johnson @ kolaberate.com



This article is about accessing your Office 365 SharePoint application via the SharePoint API to fetch or update its resources, specifically using REST API services from OutSystems. You are probably reading this because you want access to your existing company intranet, which is built in SharePoint, from within your new OutSystems application. With the SharePoint API, you can access all of its resources in the same way you would from ASP.NET, or any other development language that supports REST API access, for that matter. I am going to use a tool called Postman to demonstrate how the REST API works, and how you can access the basic operations of your SharePoint site.

Postman Tool
This is a developer-friendly tool for exercising REST APIs from any platform. We’ll use it to retrieve and update information from SharePoint via the REST API endpoints. You can get it here: Postman Download Link.

Postman & SharePoint Rest endpoints
If you are new to the SharePoint REST API, or want to know more about REST endpoints in SharePoint, visit the link Get to know the SharePoint 2013 REST service.

Now that you have at least some understanding of the Postman tool and the SharePoint REST API endpoints, we’ll start testing the SharePoint REST API with the tool.


Let’s take a simple scenario: retrieving the web title from the current site context. The endpoint for retrieving the website’s title is https://<sitename>.sharepoint.com/_api/web/title.


After entering the above URL in the URL text-box and sending the request, we simply receive an Unauthorized exception. That is because SharePoint Online is very secure and doesn’t allow anonymous users to access information on the site. Below is the error message response, after sending the request:

Figure 1

To avoid an Unauthorized exception, we need to add some request header values to the API request. Authentication and Authorization of SharePoint Add-Ins gives an overview of authorizing the Add-ins to access SharePoint resources by the APIs.

Authentication Policies:

SharePoint Online uses one of the three types of policies below to authenticate the Add-In.

  • User Policy
  • Add-In Policy – We are using this policy to authenticate the external system to access SharePoint
  • User +Add-In Policy

Request Headers:

We require the following information in various requests to authenticate with the SharePoint Online site.

  • Client Id
  • Client Secret
  • Realm (Tenant Id)
  • Access Token

Authorize Application to access SharePoint

To get authorization from an external system, we pass an access-token value as a request header along with the REST API URL. Before that, we have to get the access token, and in order to obtain an access token, we need to generate Client Id and Secret information by registering an App-only Add-In in our SharePoint site.

I have provided the steps below to get the Tenant Id, Access Token and data from SharePoint using our trusty PostMan utility.

Register Add-In

First, we have to register the Add-In in SharePoint, where we want to access the information. Follow the steps below to register the Add-In in your SharePoint site:

  • Navigate and login to SharePoint online site.
  • Then navigate to the Register Add-In page by entering the URL as https://<sitename>.sharepoint.com/_layouts/15/appregnew.aspx


  • In the App Information section, click the Generate buttons next to the Client Id and Client Secret textboxes to generate the respective values.
  • Enter the Add-In title in the Title textbox
  • Enter the App Domain as localhost
  • Enter the Redirect URI as https://localhost

Figure 2

  • Click the Create button, which registers the Add-In and returns a success message with the created information.

    Figure 3: Add-In Registration Successful

Grant Permissions to Add-In

Once the Add-In is registered, we have to set permissions for that Add-In to access SharePoint data. We will set the Full Control permission level at the web scope, so that we can read from and write to the SharePoint site.

  • Navigate to your SharePoint site
  • Then enter the URL https://<sitename>.sharepoint.com/_layouts/15/appinv.aspx in the browser. This will redirect you to the Grant permission page.
  • Enter the Client Id (which you generated earlier) in the AppId textbox and click the Lookup button. That will populate the Title, App Domain and Redirect URL fields.
  • NOTE: Make sure to enter the exact same text in the Permission Request XML field as shown in the figure below:

Figure 3: Set Permission for Add-In.
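For reference, app-only Full Control at the web scope is normally expressed with the standard SharePoint permission XML below; verify it against the figure above before pasting:

```xml
<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection/web"
                        Right="FullControl" />
</AppPermissionRequests>
```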

Then click the Create button. This displays a confirmation page where you confirm that you want to trust your newly created Add-In. Click the 'Trust It' button to continue:

Figure 4: Confirm Add-In permissions

Connect OutSystems application to SharePoint

Now we can finally use our OutSystems application to connect directly to SharePoint! Since the Add-In is registered, we can use it to retrieve the Tenant Id, which is then used to generate an Access Token. The token is only valid for a limited amount of time, so it is easiest to generate a token every time you want to perform an operation against SharePoint, unless your OutSystems application performs a substantial number of operations per run.

I also created an OutSystems project that is used for this article. It is located in the OutSystems Forge (https://www.outsystems.com/forge/); simply search for 'Sharepoint Connector'.

Retrieve the Tenant ID

Once we have registered the Client Id and Secret with the permissions, we are ready to access SharePoint from our OutSystems application.

First, we need the Tenant ID. This is accomplished by calling the GetClient method in the OutSystemsSharepointGetTenantId REST API service as shown:

Figure 5: OutSystems GetClient REST API Function

As you can see from Figure 5, there is an 'Authorization' parameter. The test value is 'bearer', and that is what is passed by the calling Server Action. This does not authorize the request, but simply returns the Bearer realm and client_id as part of the WWW-Authenticate header:

Figure 6

Note that the client_id is actually a global resource id for SharePoint itself. Don't confuse it with the 'AppId' you created for your Add-In previously.

Generate the Access Token

These attributes are now used to generate the access token. We need to create a POST API method with the URL:


to actually retrieve the access token. The preparation action of the 'SharepointTest1' web screen contains all the logic for retrieving the access token (if expired) and using that token to retrieve and create objects within your SharePoint site. The first action, which encapsulates both retrieving the tenant and generating the access token if necessary, is the GetAccessToken Server action:

Figure 7: Retrieve Tenant Id and Access Token

The GetAccessToken Server action first calls the GetClient method (Figures 5 and 6) to retrieve the Tenant Id (Realm in Figure 6) and Resource Client Id (client_id in Figure 6). Then, to acquire a new token, the BuildAccessTokenRequest Server action is called to form the request body:

Figure 8: BuildAccessTokenRequest Server action
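The request body built by this step follows the standard ACS client-credentials shape. Here is a Python sketch with placeholder values (the principal id 00000003-0000-0ff1-ce00-000000000000 is SharePoint's well-known resource id):

```python
from urllib.parse import urlencode

# Sketch of the client-credentials request body sent to the token service.
# All values except the well-known SharePoint principal id are placeholders.
client_id = "<add-in-client-id>"
client_secret = "<add-in-client-secret>"
realm = "<tenant-id>"
host = "<sitename>.sharepoint.com"
sharepoint_principal = "00000003-0000-0ff1-ce00-000000000000"

body = urlencode({
    "grant_type": "client_credentials",
    "client_id": f"{client_id}@{realm}",
    "client_secret": client_secret,
    "resource": f"{sharepoint_principal}/{host}@{realm}",
})
```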

Now that the Request Body is created, it is passed into the PostOauth REST API method to generate the actual token:

Figure 9: Request new Access Token

Once the request has been posted, the response should contain the new token:

Figure 10: Bearer Token response
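Extracting the token from a response of this shape is straightforward; here is a Python sketch using an illustrative, truncated token value:

```python
import json

# A trimmed example of the token response shape (illustrative values only).
response_text = json.dumps({
    "token_type": "Bearer",
    "expires_in": "86399",
    "access_token": "eyJ0eXAiOiJKV1Qi...",
})

# Pull out the token and build the header used by all later REST calls.
token = json.loads(response_text)["access_token"]
authorization_header = f"Bearer {token}"
```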

SharePoint REST API methods

Now that we finally have our token, we can access our SharePoint site via specific REST API calls. GetSiteInfo is the first REST API call, and it retrieves information from your actual SharePoint site:

Figure 11: Get Site Title

This simply retrieves the site title and metadata, as follows:

Figure 12: Request site title response
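Under the hood, a call like this is a simple GET against the site's /_api/web endpoint with the Bearer token attached; a Python sketch with placeholder values:

```python
# Sketch of the site-info request (placeholder site name and token).
site = "https://<sitename>.sharepoint.com"
url = f"{site}/_api/web"

headers = {
    "Authorization": "Bearer <access-token>",
    "Accept": "application/json;odata=verbose",   # ask for verbose OData JSON
}
```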

The next Server action we will look at is 'CreateTestFolder'. This action calls the CreateTestFolder REST API method to actually create a folder. I have chosen to create a folder called 'FolderA' under the 'Shared Documents' folder:

Figure 13: Create a new folder in ‘Shared Documents’
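The JSON body for a folder-creation POST like this one normally takes the standard SharePoint REST shape below (a sketch, using the folder name from this walkthrough):

```python
import json

# Illustrative body for creating "FolderA" under "Shared Documents"
# via a POST to /_api/web/folders.
folder_body = json.dumps({
    "__metadata": {"type": "SP.Folder"},
    "ServerRelativeUrl": "/Shared Documents/FolderA",
})
```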

Once the POST occurs, the folder is successfully created in the main team site's Documents folder:

Figure 14: Folder creation successful!

As you can see, the folder was created successfully. Next, let's take a look at the file creation REST API method. This method simply creates a file within the 'FolderA' folder that we just created:

Figure 15: Create new file within folder

The request body contains the actual file contents, in this case a text file. When the POST is sent to the server, as you would expect, a new file is created:

Figure 16: File successfully created
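The endpoint used for a file upload like this one is normally the Files/add method on the target folder; a Python sketch with placeholder names:

```python
from urllib.parse import quote

# Build the Files/add URL; the POST body is the raw file contents.
# All names below are placeholders for illustration.
site = "https://<sitename>.sharepoint.com"
folder = "/Shared Documents/FolderA"
filename = "test.txt"
url = (f"{site}/_api/web/GetFolderByServerRelativeUrl('{quote(folder)}')"
       f"/Files/add(url='{filename}',overwrite=true)")
```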

Now, to verify that it actually worked and to read the contents of the newly created text file, the last Server action is 'GetTestFileContents', which in turn calls the REST API method of the same name:

Figure 17: Read Contents of newly created text file

Then, when this GET method is executed, as you can see from the console debug window, the file contents match what was originally created:

Figure 18: Read contents of newly created file.
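The corresponding GET endpoint for reading raw file contents appends /$value to the file resource; a Python sketch with the same placeholder names:

```python
# Build the GET URL that returns the raw contents of the file created
# above (placeholder site/folder/file names for illustration).
site = "https://<sitename>.sharepoint.com"
url = (f"{site}/_api/web/GetFolderByServerRelativeUrl('/Shared Documents/FolderA')"
       f"/Files('test.txt')/$value")
```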


This concludes my demonstration of how to integrate your existing SharePoint Office 365 tenant with your OutSystems applications. As you can see, the Postman utility was very useful for testing and creating the Add-In and the REST API methods used to communicate with your SharePoint site. If you are new to Web API, hopefully you have also learned the basics of creating Web API methods. OutSystems, while seemingly easy to use, is not the best place to learn complex programming concepts such as REST. Postman, on the other hand, is powerful, easy to learn, and very useful for debugging your web service calls. So, I would recommend starting off with Postman, as I did at the beginning of this article, as it makes your API method creation much easier.