Sending High Volume Traffic to Azure Service Bus in a BizTalk Message Only Solution

Posted: August 3, 2012  |  Categories: Azure BizTalk Uncategorized
Why is Azure Service Bus different from other destinations when we talk about high volume traffic in a BizTalk message only solution? There is a hard limit of 100 concurrent connections per queue, topic or subscription.  Now 100 connections sounds like a lot and should satisfy most requirements, but you quickly discover that 100 is not as large as you think.
 

The scenario that we are going to look at in this blog post is 5-minute status updates being sent to 600 partially connected clients via an Azure Service Bus topic with 600 subscriptions. The status updates are returned from a SQL query in an enveloped message, so all 600 status updates hit the BizTalk MessageBox at the same time.  The clients retrieve the messages via a REST client, so the send port uses the webMessageEncoding and netMessagingTransport binding elements in a WCF-Custom adapter.  We also use the transportClientEndpointBehavior for security credentials and the serviceBusMessageInspector (http://msdn.microsoft.com/en-us/library/windowsazure/hh532013.aspx) behaviour for promoting the MachineName property that the subscriptions use for filtering.
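For readers who have not seen this combination before, the send port binding looks roughly like the following WCF configuration fragment. This is a hedged sketch only: the binding and behavior names, the issuer values and the inspector settings are placeholders, not our actual configuration.

```xml
<!-- Hypothetical sketch (fragment of <system.serviceModel>); names and
     secrets are placeholders, not our real configuration. -->
<bindings>
  <customBinding>
    <binding name="serviceBusTopicBinding">
      <webMessageEncoding />
      <netMessagingTransport />
    </binding>
  </customBinding>
</bindings>
<behaviors>
  <endpointBehaviors>
    <behavior name="serviceBusTopicBehavior">
      <transportClientEndpointBehavior>
        <tokenProvider>
          <sharedSecret issuerName="owner" issuerSecret="[issuer-key]" />
        </tokenProvider>
      </transportClientEndpointBehavior>
      <!-- serviceBusMessageInspector promotes MachineName so the
           topic subscriptions can filter on it -->
      <serviceBusMessageInspector />
    </behavior>
  </endpointBehaviors>
</behaviors>
```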

Ok, let’s start the polling. The 600 messages hit the BizTalk MessageBox and everything looks good, but then we start getting these warnings showing up in the event log:

[Screenshot: event log warning about exceeding the Service Bus concurrent connection limit]

Now we have hit the 100 concurrent connection limit with Azure Service Bus. These are just warnings and the messages will be retried; not great, but we can live with it.  Make sure that you have the retry settings configured to expect this. We are using:

[Screenshot: send port retry settings]

The messages finally all get delivered to the destination topic and using Service Bus Explorer (http://code.msdn.microsoft.com/windowsazure/Service-Bus-Explorer-f2abca5a) we can see the messages in the correct subscriptions.

We check the event log and are now seeing these three different errors:

[Screenshots: the three different WCF adapter errors from the event log]

We found out that these errors are a known issue that occurs when the WCF adapter is under strain. All of the messages were still delivered, but the errors are a bit unnerving, so we will try to understand them and post some details in a future blog post.

In the background we were also getting errors on our TMG (Forefront Threat Management Gateway) server about our machine flooding the network, so it was necessary to add the IP address of our BizTalk server to the Flood Mitigation IP Exceptions list in TMG.

[Screenshot: TMG Flood Mitigation IP Exceptions list]

Next steps: how do we throttle the number of outbound connections that BizTalk attempts to make? We tried numerous different settings with the help of Microsoft. The settings below help mitigate the warnings and errors, but do not completely solve them:

[Screenshots: the adapter and host throttling settings we tried]

What we found is that, by the nature of the netMessagingTransport, the number of threads did not have much impact on the number of connections it attempted to make. We have to assume that everything the netMessagingTransport does is asynchronous, so it is not completely possible to control it this way.

Alternatives that we looked at:

REST: the Azure Service Bus REST API does not have the limit of 100 concurrent connections because it does not hold connections open, but what we found was that the overhead of acquiring a security token and sending the message over HTTP for every message made the REST solution too slow.

Ordered Delivery: BizTalk send ports can be set to use ordered delivery, and in the past this has been a good solution for controlling SQL deadlocks when large bursts of data are being sent to SQL. Here, however, it slowed the writing of messages to the Azure Service Bus topic so much that it could not meet the performance requirements.

Throttling Orchestration: in BizTalk you can create an orchestration that receives all inbound messages based on a correlation set and then limits the number of messages being processed at once. This approach would have limited the number of messages reaching the send port at one time, and thus prevented the send port from trying to create more than 100 concurrent connections. But this is a messaging only solution, and introducing a controller pattern orchestration was not considered acceptable.
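The core idea behind this alternative (and the custom adapter one) is a simple concurrency gate. A minimal sketch of the pattern, in Python rather than BizTalk, purely for illustration (the ThrottledSender class and its names are our own invention, not anything from the BizTalk or Service Bus APIs):

```python
# Illustrative only: a semaphore caps how many sends run at once,
# the same idea as limiting concurrent Service Bus connections.
import threading
import time

class ThrottledSender:
    def __init__(self, max_concurrent):
        self._gate = threading.BoundedSemaphore(max_concurrent)
        self._lock = threading.Lock()
        self.in_flight = 0   # sends currently in progress
        self.peak = 0        # highest concurrency observed

    def send(self, message):
        with self._gate:  # blocks once max_concurrent sends are active
            with self._lock:
                self.in_flight += 1
                self.peak = max(self.peak, self.in_flight)
            try:
                time.sleep(0.001)  # stand-in for the actual network send
            finally:
                with self._lock:
                    self.in_flight -= 1

sender = ThrottledSender(max_concurrent=25)
threads = [threading.Thread(target=sender.send, args=(i,)) for i in range(600)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sender.peak)  # never exceeds 25, however many messages arrive at once
```

All 600 "messages" are handed to the sender at once, but at most 25 are ever in flight; the rest simply queue on the semaphore instead of failing.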

Custom LOB Adapter: the BizTalk Adapter Pack SDK gives you the ability to create a custom LOB adapter. Now this sounds like heaps of work to solve a throttling problem, but we remembered from some past projects that the new SAP LOB adapter has the ability to control the number of outbound connections. So we created a custom LOB adapter and looked to see whether the feature to control the number of outbound connections was specific to the SAP LOB adapter or part of the framework.  Luckily for us, it is part of the framework: the default value for MaxConnectionsPerSystem in the framework's ConnectionManager is 100. We created a simple custom LOB adapter using Microsoft.ServiceBus.dll and exposed the framework ConnectionManager's MaxConnectionsPerSystem property. We were then able to send all 600 messages with MaxConnectionsPerSystem set to 25 without receiving any errors, and within the necessary time requirements.
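As a sketch, the adapter surfaces the pool limit as a binding property, something like the fragment below. The element and attribute names for our custom adapter binding are hypothetical placeholders; only MaxConnectionsPerSystem itself and its default of 100 come from the adapter framework as described above.

```xml
<!-- Illustrative only: binding element and attribute names are
     hypothetical; the framework default for MaxConnectionsPerSystem is 100. -->
<bindings>
  <serviceBusLobBinding>
    <binding name="throttledServiceBus"
             maxConnectionsPerSystem="25" />
  </serviceBusLobBinding>
</bindings>
```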

We hope this blog post helps you understand the issues with sending high volume traffic to Azure Service Bus in a BizTalk message only solution. Please stay tuned for our post on “Creating a WCF LOB Adapter to overcome Outbound Message Throttling”.

