And we’re back here again in this noisy, busy atmosphere, the ideal place to do interviews really, on the ESPC show floor where all the exhibitors are hanging out and all the attendees are bustling about around us. So we’re going to have to speak up, Mark. Okay, so I’m sitting here now with Mark Brown from the Cosmos team. What’s your official title these days, program manager something something? I’m the Program Manager, that’s the official title right now. Oh yeah, great, and you are with Cosmos now. So fun fact: Mark, well, you know, we’ve graduated to being buddies and friends and all, but you used to be like my dad, because you took care of the Azure MVPs before you went on to do some other things and then wound up in Cosmos. Right, I was the Azure product group lead for the Azure MVPs for seven years. Yeah, something like that, so when I became an MVP you were the person who was shepherding the group that I belonged to. The official cat herder. So in that sense my dad, but now we’re good friends and we meet up everywhere around the world at conferences, having fun. Yes. And so now we’re doing that again, and we’re of course going to talk about Cosmos, about the product. There have been so many announcements during Ignite and since Ignite, and I can’t even keep track; in fact it’s almost hard for you to keep track. It is hard to keep track because, just like the rest of Azure, we are constantly releasing new stuff. We try to center it around big events like Ignite, but you just can’t fit it all in. So we had a whole flurry of announcements at Ignite; there were some big things that customers have been waiting a long time for, but we’ve kept that pace up and we’ve just been constantly streaming stuff out since then too. So let’s break it down. I can’t even keep track, so help me out.
Yeah, so the first thing we announced was GROUP BY support for our SQL API. Of course, and that was a very important piece. Yeah, customers of course want to be able to use GROUP BY in their queries, so that was something customers had been waiting a long time for, and we were really happy to be able to announce it. The other one, and this is maybe even bigger, is that customers have for a very long time wanted a way for us to automatically manage throughput for their databases. I love this. This was one of the oldest asks we had on UserVoice: some sort of autoscale feature, where we could monitor their throughput usage and then scale it up or down based on the actual needs of the application. So at Ignite we also announced a preview of what we call Autopilot. Yeah. We kind of borrowed that name from what Databricks calls it when they scale up and down. Well, that makes sense, because I was going to ask why you didn’t call it autoscale. Yeah, I think we just took a page from that. And in Cosmos you get to name your own stuff the way you like. Pretty much, yeah. With Autopilot you go and sign up for the preview, enable your account, and then create a new container, and instead of fixed reserved throughput you select autoscale. You select from different tiers of throughput, and we will scale your container up to that maximum and then back down again automatically, so you don’t have to manage that yourself. So if you have lots of users and a lot of demand you get more power, and when it’s a slow weekend it scales down to less. Exactly. Customers have applications that can have unpredictable needs; they could get a spike in traffic for whatever reason.
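As an aside for readers following along, a GROUP BY query in the Cosmos DB SQL API looks much like its relational counterpart. The container properties below are invented for illustration; the small in-memory aggregation simply mimics what the server-side query now computes for you in one round trip.

```python
# Hypothetical Cosmos DB SQL API query using the GROUP BY support
# discussed above (property names are invented):
#
#   SELECT c.city, COUNT(1) AS orders, SUM(c.total) AS revenue
#   FROM c
#   GROUP BY c.city
#
# The equivalent aggregation done client-side, to show what the
# server returns for such a query:
from collections import defaultdict

docs = [
    {"city": "Oslo", "total": 10.0},
    {"city": "Oslo", "total": 5.0},
    {"city": "Tromso", "total": 7.5},
]

groups = defaultdict(lambda: {"orders": 0, "revenue": 0.0})
for doc in docs:
    g = groups[doc["city"]]
    g["orders"] += 1
    g["revenue"] += doc["total"]

print(dict(groups))
# {'Oslo': {'orders': 2, 'revenue': 15.0}, 'Tromso': {'orders': 1, 'revenue': 7.5}}
```

Before this feature shipped, that grouping had to be done in application code exactly as above; now the service does it inside the query engine.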
Yeah, or you could have this weekly kind of cycle: during the weekdays maybe there’s a lot of traffic, a lot of demand, and during the weekend maybe much less usage, that sort of thing. If customers can predict the demand, they can easily go and raise and lower the throughput themselves, so for a batch-type scenario, say; this is more for unpredictable bursts. If something mad happens. Exactly, and they don’t want to have retries, they don’t want to slow down, they want to be able to serve all those requests and not have to think about it. So anyway, we’re really happy about that. It’s in preview now; customers can go and sign up for the preview, create a new container, and test away and play with that. We’ll GA it sometime early next year. Okay. And then second, we did a bunch of updates for our Jupyter notebooks. We announced the preview for Jupyter notebooks at Build last year, we’ve now got that to GA, and we just keep adding more and more features. So what is a Jupyter notebook? You don’t know Jupyter notebooks?
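To make the scale-up-and-back-down behaviour concrete, here is a toy model of it. The 10%-of-maximum floor matches how Cosmos DB autoscale is documented to behave; the tier value used below is illustrative, not an official list.

```python
# A toy model of the Autopilot/autoscale behaviour described above:
# throughput follows demand, but never drops below 10% of the
# configured maximum and never exceeds the maximum itself.

def provisioned_rus(demand_rus: float, max_rus: int) -> float:
    """Throughput the service would provision for the current demand."""
    floor = max_rus * 0.10
    return min(max(demand_rus, floor), max_rus)

# Quiet weekend: demand far below the floor, so you sit at the floor.
print(provisioned_rus(demand_rus=120, max_rus=4000))   # 400.0
# Weekday traffic: throughput tracks demand.
print(provisioned_rus(demand_rus=2500, max_rus=4000))  # 2500
# Spike beyond the tier: capped at the maximum (requests above this
# would be rate-limited until the tier is raised).
print(provisioned_rus(demand_rus=9000, max_rus=4000))  # 4000
```

With fixed reserved throughput you would have to pick one of those numbers yourself and change it by hand; autoscale moves along that curve for you.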
Okay, so Jupyter notebooks, talking to the camera now: these are very popular in the data science world, where you can essentially write code to manipulate, report on, or do something with your data. It’s a very popular tool for interactive work on data, and notebooks are very powerful because you can pip install libraries in there to do things like graphing or ML, or frankly pretty much anything under the sun, so they’re extremely flexible. And you can share them as well, which is also really helpful and useful. So we just keep adding more and more features to them. For example, SQL magics: notebooks have these things called magics, which are kind of like macros, a shortcut way to do stuff, and we have magics for our SQL API, so you can connect to a database and a container and then run queries against it using those SQL magics. And we have a samples gallery in there, so again, you go and create a new account, enable notebooks, and you get some sample notebooks you can play with to get familiar with them, and you can upload your own notebooks, download notebooks, and play with them. They’re really fun to use, and you can make some really pretty graphs and that kind of stuff with them too. Let’s talk about size. You were talking about ML, and that brings me onto the topic of size: you guys have increased the capacities again, haven’t you? How big can you make Cosmos databases? Well, unlimited. Unlimited, all right. We’re a horizontally partitioned database, so we don’t have the kind of four-terabyte limit you would have with, say, a relational data store; we just continue to scale out, because we’re horizontally partitioned.
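The “horizontally partitioned” point can be sketched in a few lines: documents are routed to physical partitions by hashing their partition key, which is why capacity can grow by adding partitions rather than hitting a single-machine ceiling. The hash function and partition count here are stand-ins; the real service uses its own hashing and manages physical partitions for you.

```python
# Toy illustration of hash partitioning, the idea behind Cosmos DB's
# "unlimited" horizontal scale. This is only the routing concept, not
# the service's actual implementation.
import hashlib

def partition_for(partition_key: str, partition_count: int) -> int:
    """Map a partition key value to one of N physical partitions."""
    digest = hashlib.sha256(partition_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % partition_count

# All documents sharing a partition key land on the same partition,
# which is what makes single-partition queries and batches cheap.
assert partition_for("user-42", 16) == partition_for("user-42", 16)

# Many distinct keys spread across the available partitions, so total
# storage and throughput grow with the number of partitions.
placements = {partition_for(f"user-{i}", 16) for i in range(1000)}
print(sorted(placements))
```

This is also why choosing a partition key that spreads load evenly matters so much in Cosmos data modelling.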
Do you have customers whose data just keeps growing and has become really massive? Well, by definition there are no unlimited customers, because they all have some limit, but we do have customers with large amounts of data, or processing huge amounts of data, or with huge throughput needs, if you will. And because Cosmos is a ring 0 service, it goes into the data centres basically first of all. Yeah. So you’re available in all 54 data centres? We’re available in every single region that Azure is available in; as a ring 0 service we have to be. A lot of services in Azure also take a dependency on us, so we have to be there for that reason as well. If you’re not there, then they can’t be there. That’s right. Pretty cool. What else is new? Many things? One other thing we announced was bulk execution support. We had a bulk executor library, a separate library for customers to use; we’ve now baked that into our .NET SDK, and we’re going to add it to our other SDKs over the coming months. Bulk execution really allows you to fully saturate the throughput you have. The idea is that when you do an insert, an update, or a delete in Cosmos, that’s handled by a thread, so if you have to do hundreds of thousands of those things, you would need hundreds of thousands of threads. And that’s inconvenient. What bulk does is batch all those operations up and then shoot them across the wire using a single thread rather than hundreds of thousands of threads. And you basically have support for that in the SDK, so you as a user don’t have to know how it is done.
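A rough sketch of that batching idea: instead of one network call (and thread) per operation, operations are queued and flushed to the service in batches. This is only the concept; in the real .NET SDK the batching happens internally once you enable bulk support, and the class and names below are invented for illustration.

```python
# Conceptual sketch of bulk execution: operations are collected and
# dispatched in batches instead of one wire call per operation.
from typing import Callable

class BulkWriter:
    def __init__(self, send_batch: Callable[[list], None], batch_size: int = 100):
        self.send_batch = send_batch  # stands in for one network call
        self.batch_size = batch_size
        self.pending: list = []
        self.calls = 0                # how many wire calls we made

    def add(self, operation: dict) -> None:
        self.pending.append(operation)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self) -> None:
        if self.pending:
            self.send_batch(self.pending)
            self.calls += 1
            self.pending = []

sent = []
writer = BulkWriter(send_batch=sent.append, batch_size=100)
for i in range(250):
    writer.add({"op": "create", "id": str(i)})
writer.flush()
print(writer.calls)  # 3 wire calls instead of 250
```

The saturation point from the interview follows from this: fewer, fuller requests let the client drive the container at its provisioned throughput without spawning a thread per document.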
That’s correct, so if you have scenarios where you need to do essentially bulk types of operations, you can really quickly and easily enable bulk support in the connection policy right in the SDK and then do all of your bulk operations. It’s great because it really does allow you to fully saturate all the throughput in your database and utilize it. Was that for SQL, or? Yeah, for our SQL API, because this is available through our client SDKs. Got it, yeah. Another one is batch support. Batch is also where I want to do bulk operations, if you will, but over a partition key range, or over a partition key: I want to delete a bunch of data with a single partition key value, or do some operation on a large batch. So that was another feature we shipped. And then some more stuff: we have support for private endpoints as well, brand new out of the networking team. We already had support for virtual network service endpoints; we now support private endpoints as well. That is a key feature for many corporations that simply do not want to have an external-facing endpoint for their data. Yep, and so now you connect to it from inside a virtual network, and all the endpoints are completely private. Yeah, that’s brilliant. Right, and there’s some other stuff from Ignite: we had updates for all of our management stuff, our whole control plane, so big updates to our ARM template support, which was big, and we’ve also got a new version of our Azure CLI in preview now. What’s new with that version? Everything. It’s all new, basically: better support across the whole surface area of Cosmos DB, support for all the different database APIs that we’ve got, just better support for everything across the board. Do you use the CLI yourself, or?
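The batch feature described above hinges on one constraint: a batch executes against a single partition, so operations are grouped by partition key first. A minimal sketch of that grouping, with invented field names (in the .NET SDK this surfaces as a transactional batch):

```python
# Conceptual sketch of batch support: operations are grouped by
# partition key, because a batch runs against a single partition.
from collections import defaultdict

operations = [
    {"pk": "tenant-a", "op": "upsert", "id": "1"},
    {"pk": "tenant-b", "op": "delete", "id": "2"},
    {"pk": "tenant-a", "op": "delete", "id": "3"},
]

batches = defaultdict(list)
for op in operations:
    batches[op["pk"]].append(op)

for pk, ops in batches.items():
    # each group could now be submitted as one batch to its partition
    print(pk, [o["op"] for o in ops])
# tenant-a ['upsert', 'delete']
# tenant-b ['delete']
```

This is also why the "delete everything under one partition key value" scenario from the interview fits the batch model so naturally.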
Personally, I’m a CLI guy. [Laughter] You are? No PowerShell for you? Eh. You can’t comment. PowerShell is okay. By the way, we’re working on better PowerShell support too; we’re hoping to come out with a preview on that sometime early next calendar year. Great, so the automation story, with better ARM support and the new version of the CLI, is all-up better automation. Just an all-up better automation and DevOps story for Cosmos, and this is what customers are asking for: they want to be able to manage their Cosmos environments and their production environments, and it’s super important, so this is something we’re focused on. More stuff that’s happened after Ignite. Go for it, keep going. MongoDB 3.6 support. It’s not my cup of tea, but I get it for the people that... We have a ton of customers using that. They come to Cosmos because they want a fully managed database environment; they don’t want to run or manage it on a bunch of VMs. We handle all the replication; it’s all managed for them. That’s been one of my bigger sales pitch points as well: you could go and run your own Mongo stuff and manage it and take care of it and all that, but when you get a fully managed service such as Cosmos instead, first of all you don’t have to manage anything, but second, you get so much more that you never could have built yourself, like replication and all those things that are in the platform, and that just blows Mongo out of the water, if you ask me. I mean, comparing it to running on VMs, absolutely. That’s the whole idea: you don’t want to spend time and effort managing the infrastructure and the environment for these things. The only thing you have to focus on is your model, your partition key, and the design of your database.
And with that 3.6 support you mentioned, it means that applications speaking that version of the protocol can’t even tell the difference about who they’re talking to; it’s the same API, so they’re just talking to Cosmos and they don’t even know. Yep. Some other things as well for Mongo: we now have support for change streams for MongoDB. Yeah, this is a similar feature to what we have with change feed in Cosmos DB; we’ve now made that available via the change streams interface, which is Mongo’s flavour of it, and it runs on top of our change feed support from our SQL API. It’s great, now available for the MongoDB API, and we also just announced change feed support for the Cassandra API. Oh right, cool. So lots of great stuff happening with our interop APIs. Not only that, but we also just announced a new private preview feature that customers can contact us and sign up for: the ability to do a live data migration from on-prem Cassandra right into Cosmos, basically reducing the friction. It’s a really interesting technology we’ve built, using essentially an agent that you install on a Cassandra node, which then acts as a replication agent to and from Cassandra, on-prem or wherever, into your Cosmos. I mean, there are so many features. Yeah, no, I think that’s it. I’m having to go through my list because there’s just so much stuff, I need to write it down. So many things. Well, it’s just brilliant. It’s been lovely talking to you as always, Mark, and basically signing off again, we’re here from the ESPC conference. Take care, guys. Bye.
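For readers new to change feed and change streams: both expose an ordered log of document changes that a consumer polls from a continuation point, so it can pick up exactly where it left off. A toy version of that consumption pattern, with all names invented:

```python
# Toy model of the change feed / change streams pattern mentioned
# above: an append-only log of changes, read from a continuation
# token so a consumer resumes where it stopped.

class ChangeFeed:
    def __init__(self):
        self.log: list = []

    def write(self, doc: dict) -> None:
        # every create/update is appended to the feed in order
        self.log.append(doc)

    def read(self, continuation: int) -> tuple:
        """Return changes after `continuation` plus a new token."""
        changes = self.log[continuation:]
        return changes, len(self.log)

feed = ChangeFeed()
feed.write({"id": "1", "status": "created"})
feed.write({"id": "2", "status": "created"})

changes, token = feed.read(continuation=0)
print(len(changes))                 # 2

feed.write({"id": "1", "status": "updated"})
changes, token = feed.read(continuation=token)
print([c["id"] for c in changes])   # ['1'] (only the new change)
```

The same pattern underpins the Cassandra live-migration agent described in the interview: replay every change from a known point, in order, into the destination.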