Model Context Protocol for Dynamics 365 Finance and Operations and Copilot Studio: A complete guide https://www.confiz.com/blog/model-context-protocol-for-dynamics-365-finance-and-operations-and-copilot-studio-a-complete-guide/ Tue, 24 Jun 2025 14:22:06 +0000

Modern AI applications increasingly require seamless access to enterprise data and tools across diverse environments. As organizations build intelligent agents at scale, it becomes essential to standardize how these systems connect to business logic, invoke tools, and adapt to evolving scenarios. This is where the Model Context Protocol (MCP) proves invaluable.

In this blog, we explore the Model Context Protocol (MCP) for Finance and Operations apps and how it enables scalable, intelligent integrations, especially within Copilot Studio. You’ll gain a clear understanding of what the Model Context Protocol is, its architecture, real-world use cases, and how to build and deploy your own MCP server using Copilot Studio.

What is Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open standard that connects large language models (LLMs) to external tools, business data, and APIs in a structured and uniform way. Often referred to as the “USB-C for AI agents,” MCP eliminates the need for custom integration logic, enabling plug-and-play AI connectivity.

What are the main benefits of using the Model Context Protocol?

The Model Context Protocol (MCP) is designed to facilitate efficient and accurate interaction between large language models (LLMs) and external systems or applications. Here are the main benefits of using the Model Context Protocol:

  • Unified access to business logic and data across multiple finance and operations applications.
  • Cross-platform agent reuse for streamlined development and maintenance.
  • Tool interoperability, allowing tools to be accessed from any MCP-compatible agent platform.
  • Simplified development experience, reducing overhead for building and connecting intelligent agents.

Overview of the MCP architecture

The Model Context Protocol architecture is based on a modular client-server model. The main components include:

  • MCP Hosts
  • MCP Clients
  • MCP Servers

These elements work together to enable flexible, standardized communication between AI agents and enterprise systems: the host application embeds one or more MCP clients, and each client maintains a connection to an MCP server that exposes tools and data.

Extending Dynamics 365 F&O’s capabilities using MCP

Dynamics 365 F&O contains extensive business logic and data, making it ideal for LLM-based copilots. By building an MCP server over your D365 APIs or databases, you can create agents that read data, trigger actions, and answer complex business questions.

Read more: Key features to explore in Microsoft Dynamics 365 Finance and Operations in 2025

Prerequisites

Before using the Dynamics 365 ERP MCP server, ensure the following version requirements are met:

  • Finance and Operations apps version: 10.0.44 (10.0.2263.17) or later
  • Copilot in Microsoft Dynamics 365 Finance: 1.0.3049.1 or later
  • Copilot in Microsoft Dynamics 365 Supply Chain Management: 1.1.03046.2 or later

Introducing the default MCP server in Microsoft Dynamics 365

Microsoft Dynamics 365 ERP now includes a built-in MCP server. This server exposes tools from Dynamics 365 Finance and Operations applications to agent platforms that support MCP, enabling the following capabilities:

  • Agent access to data and business logic across multiple apps
  • Reuse of agents across ERP systems
  • Tool interoperability across any MCP-compatible agent platform
  • A simplified agent development experience

Using the Dynamics 365 ERP MCP Server in Copilot Studio

You can use the Dynamics 365 ERP MCP server to create agents in Microsoft Copilot Studio, too. The server provides tools that enable actions within Dynamics 365 Finance and Supply Chain Management.

Integrating the MCP server into a Copilot Studio workflow is straightforward:

  • Open or create an agent in Copilot Studio.
  • Navigate to the Tools tab and select ‘Add a tool.’
  • Filter by Model Context Protocol and search for ‘Dynamics 365 ERP MCP.’
  • Create a connection and add it to the agent.

Once connected, the agent can leverage all tools made available through the server to interact with your finance and operations data.

Read more: How to build your own copilot in Microsoft Copilot Studio?

An overview of Dynamics 365 ERP MCP tools: What’s available?

The Dynamics 365 ERP MCP server includes a static list of predefined tools. Each tool is backed by a custom Dataverse API, which defines its schema and performs the operation. You can find these APIs in the corresponding Dataverse solutions:

  • Copilot in Microsoft Dynamics 365 Finance
  • Copilot in Microsoft Dynamics 365 Supply Chain Management

Each tool includes:

  • A description of its purpose
  • A schema definition via the Dataverse custom API
  • A list of input parameters
  • A set of expected outputs

Here are some tools:

1. Find approved vendors

  • Name: findapprovedvendors
  • Purpose: Retrieves vendors approved to supply specific items
  • Input Parameters:
    • ItemNumber (String, Optional) — The item ID to filter approved vendors
    • vendorAccountNumber (String, Optional) — If provided, limits the result to this vendor
  • Use Case: Used by procurement agents to validate sourcing.
  • Output:
    • Itemnumber (String) — The item number.
    • approvedvendoraccountnumber (String) — The vendor account number of the approved vendor for the item.
    • validfrom (datetime) — The date and time from which the approval is valid.
    • validto (datetime) — The date and time until which the approval is valid.
  • Custom API: msdyn_FindApprovedVendors

2. Create a transfer order for a single item

  • Name: createtransferorderforsingleitem
  • Purpose: Creates a transfer order for a specified item.
  • Input Parameters:
    • ItemNumber (String, Required) — The item code to transfer.
    • fromWarehouseId (String, Required) — Warehouse ID from where the item will be shipped.
    • toWarehouseId (String, Required) — Warehouse ID to which the item will be sent.
    • quantity (int, Required) — The number of items to transfer.
  • Use Case: Used in inventory management agents to automate internal stock movements.
  • Output:
    • result (String) — A message indicating the result of the transfer operation.
  • Custom API: msdyn_CreateTransferOrderForSingleItem

3. Match the invoice

  • Name: matchinvoice
  • Purpose: Matches a vendor invoice with a product receipt.
  • Input Parameters:
    • invoiceId (String, Required) — Vendor invoice number to be matched.
  • Use Case: Ensures invoicing and receipt records align.
  • Output:
    • fullyMatched (Boolean) — Whether the invoice has been fully matched.
  • Custom API: msdyn_VendInvoiceMatchProductReceiptCustomAPI
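To make the plumbing behind these tools more concrete, the sketch below shows one way a backing custom API such as msdyn_FindApprovedVendors could be called directly through the Dataverse Web API. This is a minimal illustration, not the shipped contract: the endpoint version, the request parameter casing, and the token acquisition are all assumptions.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Minimal sketch: invoke a Dataverse custom API (here msdyn_FindApprovedVendors) over the Web API.
// The API version, parameter casing, and bearer-token handling are illustrative assumptions.
public static class ApprovedVendorClient
{
    public static async Task<string> FindApprovedVendorsAsync(
        string orgUrl, string accessToken, string itemNumber)
    {
        using var http = new HttpClient { BaseAddress = new Uri(orgUrl) };
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Unbound custom APIs are exposed as actions under /api/data/{version}/{UniqueName}.
        var response = await http.PostAsJsonAsync(
            "/api/data/v9.2/msdyn_FindApprovedVendors",
            new { ItemNumber = itemNumber });   // parameter name taken from the tool description above

        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();   // JSON payload with the approved vendors
    }
}
```

When the tool is used from Copilot Studio or another MCP-compatible agent platform, the platform performs an equivalent call on your behalf, so you normally never write this plumbing yourself.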

Conclusion

Successfully integrating the Model Context Protocol (MCP) with Microsoft Dynamics 365 Finance and Operations empowers businesses to create intelligent agents that are scalable, secure, and deeply embedded into their operational processes. At Confiz, we’ve seen that a well-planned MCP implementation, paired with strong technical expertise and cross-functional alignment, can enhance automation, streamline data access, and support faster and smarter decision-making across the enterprise.

Considering MCP for your AI integration strategy? We’ll help you evaluate compatibility, identify gaps, and plan a seamless rollout. For tailored guidance or to explore your integration needs, connect with us at marketing@confiz.com.

Navigating service protection API limits in Dynamics 365: Best practices for optimization https://www.confiz.com/blog/service-protection-api-limits-in-dynamics-365-best-practices-for-optimization/ Tue, 18 Feb 2025 09:09:59 +0000

Imagine you’re using Microsoft Dataverse to operate a mission-critical application when suddenly your users start encountering errors like “Service Protection API Limit Exceeded” or “429 Too Many Requests.” Panic strikes: what went wrong? What does the error “API rate limit exceeded” mean, and how can it be fixed? More importantly, how can it be avoided in the future?

Welcome to the realm of Service Protection API Limits, a security feature intended to maintain the Microsoft Dataverse platform’s seamless operation for all users. In this post, we’ll cover all you need to know about service protection API limits in Microsoft Dynamics 365, their effects, and how to handle them expertly.

What is rate limiting?

API rate limiting is a way of controlling the number of requests sent to a server. It is important because it prevents the server from becoming overwhelmed and keeps everything running smoothly. If too many requests are made, the server can reject the request, return an error message, or delay its response.


Token Bucket Algorithm: A key rate limiting technique

One of the most common rate-limiting techniques is the Token Bucket Algorithm, used by Amazon Web Services APIs. Each incoming request consumes a token from a bucket that refills at a fixed rate; when the bucket is empty, further requests are rejected or delayed until tokens accumulate again.
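As a rough, platform-agnostic illustration of the idea, here is a minimal token bucket sketch; the capacity and refill rate are arbitrary example values.

```csharp
using System;

// Minimal token-bucket sketch: each request consumes a token; tokens refill at a fixed rate.
public sealed class TokenBucket
{
    private readonly int _capacity;         // maximum burst size
    private readonly double _refillPerSec;  // sustained request rate
    private double _tokens;
    private DateTime _lastRefill = DateTime.UtcNow;

    public TokenBucket(int capacity, double refillPerSec)
    {
        _capacity = capacity;
        _refillPerSec = refillPerSec;
        _tokens = capacity;
    }

    public bool TryConsume()
    {
        var now = DateTime.UtcNow;
        _tokens = Math.Min(_capacity, _tokens + (now - _lastRefill).TotalSeconds * _refillPerSec);
        _lastRefill = now;

        if (_tokens < 1) return false;  // bucket empty: reject or delay this request
        _tokens -= 1;
        return true;
    }
}
```

A caller would check TryConsume() before sending each request and back off whenever it returns false.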

Understanding Microsoft Dataverse API limits

Microsoft Dataverse evaluates API limits based on three key factors:

1: Number of requests: Total requests a user sends within a 5-minute window.

2: Execution time: Combined time required to process all requests within a 5-minute window.

3: Concurrent requests: Number of simultaneous requests sent by a user.

These limits are enforced per user and web server, ensuring fair usage across the platform.

Common error messages related to API limits in Dynamics 365

Consider Service Protection API Limits as the Microsoft Dataverse platform’s traffic lights. They safeguard the platform’s availability and performance for all users by preventing any user or application from overloading the system with requests. Microsoft Dataverse responds with specific errors when an application exceeds these limits:

  • A 429 Too Many Requests error in the Web API.
  • Service Protection API Limit Exceeded.
  • An OrganizationServiceFault with distinct error codes in the Microsoft Dataverse SDK for .NET.

Why should you be careful?

Knowing these boundaries is essential whether you’re creating portal solutions, data integration tools, or interactive applications. Ignoring them may result in:

  • Frustrated users encountering errors.
  • Delayed operations and disrupted workflows.
  • Lower throughput for data-intensive applications.

Don’t worry, though; we’ve got you covered. Let’s examine how to stay within these limits.

Impact of API limits on different types of applications

1: Interactive client applications

Interactive apps are the face of a business; end-users rely on them to perform day-to-day tasks. During normal use, these apps are unlikely to hit API limits, but bulk operations (like updating hundreds of records at once) can trigger errors.

What you can do:

  • Design your UI to discourage users from sending overly demanding requests.
  • Handle errors gracefully – don’t show technical error messages to end-users.
  • Implement retry mechanisms to manage temporary limits.

Pro tip: Use progress indicators and friendly messages like “Processing your request—please wait” to keep users informed.

2: Data integration applications

Data integration apps are the workhorses of the system, handling bulk data loads and updates. They are more likely to hit API limits due to their high request volumes.

What you can do:

  • Use batch operations to reduce the number of individual requests (a minimal sketch follows below).
  • Implement parallel processing to maximize throughput.
  • Monitor and adjust request rates based on the Retry-After duration.

Pro tip: Start with a lower request rate and gradually increase it until you hit the limits. Let the server guide you to the optimal rate.
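As referenced in the list above, here is a minimal sketch of batching with the Dataverse SDK’s ExecuteMultipleRequest. The service connection (an IOrganizationService) and the recordsToCreate collection are assumed to already exist in your code.

```csharp
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

// Minimal sketch: group many creates into one ExecuteMultipleRequest
// instead of sending each record as a separate API call.
var batch = new ExecuteMultipleRequest
{
    Settings = new ExecuteMultipleSettings
    {
        ContinueOnError = true,   // keep processing if one record fails
        ReturnResponses = false   // skip per-record responses to reduce payload
    },
    Requests = new OrganizationRequestCollection()
};

foreach (Entity record in recordsToCreate)   // assumed IEnumerable<Entity>
{
    batch.Requests.Add(new CreateRequest { Target = record });
}

// 'service' is an existing IOrganizationService connection (assumed).
var result = (ExecuteMultipleResponse)service.Execute(batch);
```

Each ExecuteMultiple batch is capped at 1,000 requests, so split larger loads across several batches and keep an eye on the Retry-After guidance discussed below.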

3: Portal applications

Portal apps frequently handle requests from anonymous users through a service principal account. Because limits are applied per user, high traffic can quickly trigger errors.

What you can do:

  • Display a user-friendly message like “Server is busy – please try again later.”
  • Use the Retry-After duration to inform users when the system will be available again.
  • Disable further requests until the current operation is complete.

Pro tip: Implement a queue system to manage high traffic and prevent overwhelming the server.

Retry strategies: Your safety net

When you hit a service protection limit, the response includes a Retry-After duration. This is your cue to pause and retry the request after the specified time.

For interactive apps

  • Display a “Server is busy” message.
  • Allow an option to cancel the operation.
  • Prevent users from submitting additional requests until the current one is complete.

For non-interactive apps

  • Pause execution using methods like Task.Delay or an equivalent.
  • Retry the request after the Retry-After duration has passed.

Pro tip: Use libraries like Polly (for .NET) to implement robust retry policies. Here’s an example:
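One possible shape for such a policy is sketched below, assuming Polly v7-style syntax and an HTTP-level call that can return 429 with a Retry-After header; the httpClient instance and the example query URL are placeholders.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Polly;

// Minimal sketch: retry on 429 responses, honoring the Retry-After header when present.
var retryPolicy = Policy
    .HandleResult<HttpResponseMessage>(r => r.StatusCode == (HttpStatusCode)429)
    .WaitAndRetryAsync(
        3,                                                   // retry up to three times
        (attempt, outcome, context) =>
            outcome.Result?.Headers.RetryAfter?.Delta        // pause for the server-suggested duration
            ?? TimeSpan.FromSeconds(Math.Pow(2, attempt)),   // otherwise fall back to exponential backoff
        (outcome, delay, attempt, context) => Task.CompletedTask);

// 'httpClient' is an existing, authenticated HttpClient (assumed); the query is only an example.
HttpResponseMessage response = await retryPolicy.ExecuteAsync(
    () => httpClient.GetAsync("/api/data/v9.2/accounts?$top=1"));
```

The key idea is to prefer the server’s Retry-After hint over a fixed client-side delay, which keeps throughput close to the maximum the platform will allow.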

What is API throttling?

API throttling is a technique used to regulate the volume of API requests. When a client exceeds a pre-defined limit, the server slows or blocks further requests for a set period. More aggressive than rate limiting, throttling is how a server reacts to clients that exceed their allowance, ensuring it can keep processing requests from several clients without becoming unresponsive or crashing.

API throttling vs. rate limiting: Understanding the difference

  • API throttling: Throttling controls the amount of incoming traffic to an API over a specific period. It manages API usage by slowing down requests when a certain threshold is reached, ensuring stability and preventing overuse. Instead of outright rejecting excess requests, throttling might delay them or return a “try again later” response. This is useful for maintaining service performance during high traffic.
  • Rate limiting: Rate limiting sets a strict cap on the number of API requests a client can make within a given time frame. Once the limit is exceeded, additional requests are rejected until the time window resets. Rate limiting is often used to enforce fair use policies and prevent abuse of API resources.

Key difference

  • Throttling aims to maintain system stability by managing traffic flow.
  • Rate Limiting enforces a hard ceiling on the number of requests to prevent excessive use.

The art of throttling prioritization: Keeping systems running smoothly

Consider yourself in charge of a bustling highway, with cars standing in for API requests. Some cars are delivery trucks that can tolerate a slightly longer wait (low priority), others are normal commuters (medium priority), and some are emergency responders (high-priority integrations). Your role is to guarantee that traffic moves freely without creating bottlenecks.

Throttling prioritization for APIs accomplishes precisely that: by preventing system overload and ensuring that vital activities are never halted by high demand, it helps manage the flow of service requests.

Why does prioritization matter?

APIs are essential to the efficient exchange of data in modern financial and operational systems. Without suitable limits in place, however, excessive API requests might cause the system to lag or even crash. This is where resource-based service protection API limits come in.

These limits complement user-based limits to safeguard system performance. To keep the system healthy, requests are throttled if the total server load becomes too high. However, priority determines which requests get through first, so not all requests are handled equally.

Note: Prioritization does not apply to user-based limits; it only applies to resource-based API limits.

How does prioritization work?

The Throttling Manager intervenes when an influx of API calls begins to overwhelm the system, determining which requests should be handled first. Lower-priority requests might be throttled and result in a “Too Many Requests” error if system performance is threatened.

To manage throttling priorities effectively, administrators use the Throttling Priority Mapping page. This allows them to assign priorities to different integrations, ensuring that:

  • High-priority integrations run smoothly
  • Medium-priority requests get processed efficiently
  • Low-priority requests are delayed only when necessary

Understanding priority levels

Priorities are categorized based on authentication type, using Microsoft Entra ID (formerly Azure AD). Two authentication methods are supported:

  1. User-based authentication: It uses login credentials for authentication and authorization.
  2. Microsoft Entra application-based authentication: It uses registered app credentials (application registered in Microsoft Entra and an associated secret for authentication).

Priority levels and their impact

Low-priority requests

  • Throttled first when resource consumption is high
  • Used for non-critical integrations

Medium-priority requests

  • Balanced threshold; more resilient than low priority
  • Ideal for regular business operations

High-priority requests

  • Least likely to be throttled
  • Reserved for mission-critical applications

These priority levels ensure that essential services always have the resources they need, even during high-traffic periods.

Setting up priorities in Finance & Operations Apps

Once you register your services in Microsoft Entra and finance and operations apps, you can set up priorities for integrations by following the steps below:

  1. In finance and operations apps, navigate to System Administration > Setup > Throttling Priority Mapping.
  2. Click New to create a priority rule.
  3. Choose an Authentication Type: User-based or Microsoft Entra application-based.
  4. If using Microsoft Entra application, select the Client ID of the registered application.
  5. If using User authentication, select the appropriate User ID.
  6. Assign a priority level (Low, Medium, or High) and click Save.

Note: You must have System Administrator or Integration Priority Manager permissions to set up priorities.

Pro tips to maximize throughput

  • Let the server be your guide: Moderately increase request rates until you hit limits, then rely on the Retry-After duration to optimize throughput.
  • Use parallel processing: Support multiple threads to improve performance, but keep the number of concurrent requests within limits (see the sketch after this list).
  • Avoid large batches: Small batch sizes with higher concurrency are often more efficient than large batches.
  • Update legacy applications: Make sure older apps are updated to handle service protection errors gracefully.
  • Shift to real-time integration: Move away from periodic bulk operations to real-time data integration for smoother performance.
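To illustrate the parallel-processing tip above, a common way to cap in-flight requests in .NET is a SemaphoreSlim gate. This is a minimal sketch; the sendRequestAsync delegate, the workItems collection, and the concurrency cap of 8 are placeholders to adapt to your own integration.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Minimal sketch: run requests in parallel while capping how many are in flight at once.
static async Task SendAllAsync(IEnumerable<string> workItems, Func<string, Task> sendRequestAsync)
{
    using var gate = new SemaphoreSlim(8);   // at most 8 concurrent requests (tune to your limits)

    var tasks = workItems.Select(async item =>
    {
        await gate.WaitAsync();
        try
        {
            await sendRequestAsync(item);    // your actual Dataverse/OData call goes here
        }
        finally
        {
            gate.Release();
        }
    });

    await Task.WhenAll(tasks);
}
```

Start with a conservative cap, then raise it gradually while watching for 429 responses, exactly as the first tip suggests.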

Conclusion

Service Protection API Limits are a necessary safety measure intended to preserve a reliable and efficient Microsoft Dataverse environment, not a burden. By understanding these limits and applying the tactics mentioned above, you can create robust applications that give your users a dependable and seamless experience.

Approach a 429 Too Many Requests error with confidence the next time you run into it. Pause, evaluate, and retry your request in accordance with the server’s instructions. With the right tactics in place, you can successfully navigate these limits and get the best possible performance from your apps.

If you have any questions about API limits in Dynamics 365 or encounter an unexpected error, contact us at marketing@confiz.com.

How to create a model in Dynamics 365 Finance and Operations? https://www.confiz.com/blog/how-to-create-a-model-in-dynamics-365-finance-and-operations/ Mon, 30 Sep 2024 07:25:06 +0000

Microsoft Dynamics 365 Finance and Operations provides access to the base source code, which is organized into various models. These models group related objects and code for easier management. While you can’t directly modify the base objects, you can add or modify functionality by creating new models. This is done through Visual Studio, using the Dynamics 365 menu.

In this blog, we will explain the concept of Dynamics 365 models and outline the steps to create your own custom model.

Introduction to Dynamics 365 model building

In Microsoft Dynamics 365 Finance and Operations, a model is a logical grouping of elements, including forms, classes, tables, and other objects created within a Finance and Operations project in Visual Studio. Understanding models in Dynamics 365 is crucial for developers who need to organize their work effectively. Additionally, models can encapsulate DLLs, references to other models, and metadata. These models are eventually packaged for deployment to different environments.

Easy steps to create a model in Dynamics 365

The Dynamics 365 model creation process involves the following steps:

Step 1: Go to Visual Studio

  1. Open Visual Studio as an administrator.
  2. Go to the Dynamics 365 menu (if you don’t see it, navigate to Extensions > Dynamics 365).

Step 2: Create the model

  1. Select Model Management, then click Create Model.
  2. The ‘Create Model’ wizard will appear.

Step 3: Provide details for your new model

In the ‘Create Model‘ wizard, provide the following details for your new model:

  • Model name: Give your model a descriptive name.
  • Publisher name: Specify the publisher’s name (usually your organization’s name).
  • Version number: Set the version number for your model.
  • Layer: Choose the appropriate layer (e.g., USR, CUS, VAR, or ISV).

The available layers are:

  • USR: The user layer is for user modifications, such as reports.
  • CUS: The customer layer is for modifications that are specific to a company.
  • VAR: Value Added Resellers (VAR) can make modifications or new developments to the VAR layer as specified by the customers or as a strategy for creating an industry-specific solution.
  • SYS: The standard application is implemented at the lowest level, the SYS layer. The application objects in the standard application can never be deleted.
  • ISV: When an Independent Software Vendor (ISV) creates their solution, their modifications are saved in the ISV layer.
  • GLS: When the application is modified to match country or region-specific legal demands, these modifications are saved in the GLS layer.
  • SLN: Distributors use the solution layer to implement vertical partner solutions.
  • FPK: The FPK layer is an application object patch layer reserved by Microsoft for future patching or other updates.

Step 4: Add a description (optional)

This step is optional, but you can always add a brief description of your model.

Once you are done, click the Next button.

Step 5: Select the package

On the next page of the wizard, select the package by following these steps:

  1. Select ‘Create new package.’
  2. Avoid using the ‘Select existing package‘ option. It is only for supporting the legacy usage of models. This is a key step in the Dynamics 365 model building process.
  3. Once you are done, click on the Next button.

Step 6: Select the referenced packages

  • On the next page of the wizard, specify which other models/packages this model should reference. If your model’s objects or code reference objects in other models, you must reference those models.
  • You can update referenced packages later if needed.
  • Start by selecting ‘Application Platform‘ and ‘Application Foundation.’ You may need to add more references later.
  • Click on the Next button.

Step 7: Getting familiar with the summary page

The Summary page displays all the details you have provided for the model.

  1. Optionally, check ‘Create new project’ to have the system create a new project right after the model is created. This is useful if you plan to start working on objects or code immediately.
  2. Optionally, check ‘Make this my default model for new projects’ to save time if you add objects and code to this model across multiple projects. This makes the new model the default for all new projects.
  3. You can change the default model for a project later by right-clicking on the project, selecting Properties, and changing the ‘Model’ property.
  4. Click the Next button.

After you click Next, your model will be created, and a new project window will appear.

Create a new Finance and Operations project in Visual Studio

A project is created and associated with the new model. Any elements you add to the project, such as an extended data type (EDT) or a table with fields and a method, will also be included in the model. Here are the steps to create a project:

  1. A new project window is automatically opened in Visual Studio after creating the model.
  2. In the new project window, the type of project is set to Finance and Operations.
  3. The newly created model is automatically selected by default in the project settings.
  4. Right-click on the project within Visual Studio and select Properties.

In the properties window, you will see that the First Model is already selected as the default model. This is because we have previously configured the option to make this model the default.

Dynamics 365 Modeling best practices

When creating a model, observe the following best practices:

  • When deciding between a single model and multiple models, it is important to comply with Dynamics 365 modeling best practices.
  • Models are intended to group related objects. For instance, if you’re developing new functionality, such as a menu item, form, table, and class, all these related objects should be contained within the same model.

Flexibility with multiple models

  • While it’s possible to consolidate many features into a single model, developers might also create multiple models to segregate unrelated functionalities.
  • This flexibility allows for a more modular deployment, enabling smaller code footprints when certain functionalities are not required.
  • Dynamics 365 model building for beginners should start with simpler projects to understand the process before moving on to more complex scenarios.

Common challenges in Dynamics 365 model building

Although creating a model in Dynamics 365 is straightforward, there are potential challenges you may encounter:

  • While multiple models offer flexibility, they also introduce complexity.
  • Many companies find it simpler to maintain a single model to avoid circular dependencies, where models rely on each other.
  • Other common challenges in Dynamics 365 model building include managing dependencies and ensuring that models are not overly interconnected.

Conclusion

Creating a model in Dynamics 365 Finance and Operations allows you to customize and extend the platform to suit your business needs. Following the steps outlined in this guide, you can successfully set up your model and enhance your system’s functionality. However, if you encounter any challenges or need expert guidance, our team at Confiz is here to help. Contact us at marketing@confiz.com to learn how we can support your Dynamics 365 projects.
