• No integration costs – robots drive existing applications.
• IT robots are “trained” by their users by being “shown” how to complete a task. This is akin to training a new employee.
• Once a robot is trained, that training can be rolled out across any number of other robots.
• Robot knowledge is extended and re-used over time.
• A robot is trained in the live environment, making projects less expensive and much faster than traditional IT delivery.
• Multiple robots applied to a task can be synchronised to deliver large-scale robotic platforms.
• Robots are universal application orchestrators – any application that can be used by a person can be used by a modern robot, whether mainframe, legacy, bespoke, web-service-enabled, or even a closed third-party hosted service.
• Applications are “read” by the robot: through dedicated APIs where they exist, through the OS prior to application display, or through the screen in the context of the native application. In this last case the modern robot “reads” an application screen in context, in the same way a user does; as part of its training the robot is shown how to read the application’s display, much as a new user would be.
• Robots collect procedural knowledge which, over time, builds into a shared library that can be re-used by any other robot or device (in the same way objects are built and re-used in traditional software engineering).
• Management information is gathered automatically as the robot operates. All processes generate statistical profiles as a by-product of doing the action. This allows tuning and development of a process in light of real data.
• Modern robot systems come with failover and recovery built in as core capabilities. If changes take place, or downstream failures occur, a “smart” response can be trained into the overall system.
• Modern robot systems have full audit and security authorisation, meaning that all changes and all access are recorded and regulated. Back-up of process steps, roll-back and recovery, and process change-highlighting are all captured automatically by the robot platform.
• Robotic Automation is principally aimed at clerical staff replacement, as opposed to the clerical staff acceleration offered by BPMS. The philosophy of the approach is therefore to target routine, repetitive, rules-based tasks (procedures that are sub-tasks within a larger business process). Such tasks can often tie clerical staff down for long stretches of time. Very often such tasks are small, perhaps involving 5-10 people, and so do not justify large IT, or even BPMS, projects to automate them. The difference with robotic automation is that no IT build is required, and business users can “show” the robot what to do. The capability is therefore distributed to operations staff, allowing them to divide and conquer the many mid-to-small automation initiatives that would otherwise require people.
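The three application-“reading” routes described above (dedicated API, OS-level access, and on-screen reading) can be illustrated with a minimal sketch. All class and function names here are hypothetical illustrations of the idea, not part of any RPA product’s API:

```python
from dataclasses import dataclass

@dataclass
class Field:
    """A single data item read from an application screen or API."""
    name: str
    value: str

class ApiConnector:
    """Preferred route: the application exposes a dedicated API."""
    def read(self, field: str) -> Field:
        # A real robot would call the application's API here.
        return Field(field, "<value via API>")

class OsConnector:
    """Fallback: interrogate UI elements via the OS, prior to display."""
    def read(self, field: str) -> Field:
        return Field(field, "<value via OS accessibility layer>")

class ScreenConnector:
    """Last resort: read the rendered screen in context, as a user would."""
    def read(self, field: str) -> Field:
        return Field(field, "<value via screen recognition>")

def choose_connector(has_api: bool, os_accessible: bool):
    """Pick the least intrusive route available, mirroring the order in the text."""
    if has_api:
        return ApiConnector()
    if os_accessible:
        return OsConnector()
    return ScreenConnector()
```

In this sketch a closed legacy screen with no API and no OS-level access would fall through to `ScreenConnector`, the route the text compares to a human reading the display.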
This difference in scale is illustrated by the so-called Long Tail of Automation Requirements. At the head of the tail, core IT deals with the high-volume bulk processing requirements an organisation may have: typically core ERP systems, mainframe accounting and core databases. Towards the middle of the graph, requirements become more specialist and diverse; this is where an organisation often differentiates its product and service offerings. Typical technologies here are workflow, desktop integration, BPMS and agent acceleration – large IT control programs that serve as a platform for automation and work management.
Finally, the third section of the Long Tail is characterised by diversity: the tasks are often too diverse to fit into an IT change programme, and may be too small to justify IT project costs. Here the traditional approach has been to outsource or offshore in order to adjust labour rates and make the task more competitive. Robotic automation offers an alternative to offshoring or outsourcing, presenting a new cost band of labour based on robots.
• Robotic Automation is normally housed, monitored, licensed and controlled by IT, or at the very least by a centralised governance body. This group enforces a central usage policy configured within all robots.
• Robotic processes are accretive – objects are built and then become available for re-use across the business. This allows disparate groups to share and build common resources, supporting much greater re-use than many software environments achieve.
• No new data – robot best practice discourages, or even forbids, the creation of new data. Systems are used by robots exactly as they appear to users, so as to coordinate and streamline enterprise governance.
• Robotic FTEs are around one third of the price of offshored FTEs and can work 24/7.
• Speed to automation – days and weeks to automate clerical procedures
• “Self Build” – no need for specialist IT, robots are trained by end-users
• Robots are trained to do repetitive clerical tasks and drive existing applications, so no costly integration or expensive process re-design expertise is needed
• A small specialist team from business operations works alongside the robots to train them, manage exceptions and continually improve the robots’ operational performance
• MI is automatically captured across all procedures operated.
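The automatic MI capture described above – every process generating a statistical profile as a by-product of doing the work – can be sketched as a wrapper that times each step as it runs. The names here are illustrative, not a real product interface:

```python
import time
from collections import defaultdict
from statistics import mean

class StepProfiler:
    """Accumulates per-step timing statistics as a by-product of execution."""
    def __init__(self):
        self.timings = defaultdict(list)

    def run(self, step_name, action, *args):
        """Execute a process step, recording how long it took."""
        start = time.perf_counter()
        result = action(*args)
        self.timings[step_name].append(time.perf_counter() - start)
        return result

    def profile(self):
        """Return count and mean duration per step - the 'MI' in the text."""
        return {step: {"count": len(t), "mean_seconds": mean(t)}
                for step, t in self.timings.items()}

# Example: profile a trivial clerical step three times.
profiler = StepProfiler()
for _ in range(3):
    profiler.run("validate_record", lambda: sum(range(1000)))
report = profiler.profile()
```

Because the statistics accrue automatically, the resulting profile can then be used to tune and develop the process in light of real data, as the text describes.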
This capability, coupled with a wide range of dedicated tools that have been developed, means that we are confident of being able to link to any system at the click of a button. This proven application orchestration capability ensures that new processes can be rapidly designed, built and tested without any impact on existing systems.
Blue Prism gathers data and integrates processes at an abstracted level, using a variety of techniques and interfaces that ensure underlying systems are not impacted.
Permissions to design, create, edit and run processes and business objects are specific to each authorised user.
A full audit trail of changes to any process is kept, and comparisons of the before and after effect of changes are provided.
The log created at run-time for each process provides a detailed, time-stamped history of every action and decision taken within an automated process.
Our clients tend to find that running a process with Blue Prism gives them far more control than a manual process, and from a compliance point of view it assures that processes are run consistently, in line with the process definition.
Our audit viewer allows users to track the details of who, when, why and exactly how a process was changed.
In addition, detailed logs are held of every step taken during execution, providing a robust and detailed audit trail.
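A detailed, time-stamped run log of the kind described can be sketched as below. This is a generic illustration of the concept, assuming a simple JSON export; it is not Blue Prism’s actual log format or API:

```python
import datetime
import json

class RunLog:
    """Records a time-stamped entry for every action and decision taken."""
    def __init__(self, process_name: str):
        self.process_name = process_name
        self.entries = []

    def record(self, step: str, outcome: str, detail: str = ""):
        """Append one audit entry with a UTC timestamp."""
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "process": self.process_name,
            "step": step,
            "outcome": outcome,
            "detail": detail,
        })

    def export(self) -> str:
        """Serialise the audit trail, e.g. for a compliance review."""
        return json.dumps(self.entries, indent=2)

# Example: two steps of a hypothetical invoice-matching process.
log = RunLog("invoice_matching")
log.record("fetch_invoice", "success", "invoice retrieved")
log.record("amount_check", "decision", "amount within tolerance")
```

Because every step and decision is captured with a timestamp as a side effect of execution, the exported trail supports the who/when/why questions an audit viewer answers.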