What’s New In SharePoint 2013 Search


Learn about What’s New In SharePoint 2013 Search with Agnes Molnar.

In this video for IT Professionals, Agnes looks at the new SharePoint 2013 Search architecture, the user interface, and query and content processing.

 


Video Transcript:

So let me show you what’s new in SharePoint 2013 Search. First of all, we have only one search. This is very important, because in 2010 we had the out-of-the-box SharePoint Search, and we also had FAST Search for SharePoint. That was a different product: you had to install it separately, you had to configure it separately, and of course you had to integrate it with SharePoint 2010 in order to be able to use it. I wouldn’t say it was a nightmare, but it was a really complex process to install FAST Search for SharePoint in the proper way, and we still had two search engines, FAST Search and SharePoint Search, that had to work together. In SharePoint 2013 this integration process has been finished, so the FAST search engine and the SharePoint search engine are finally fully integrated. As soon as you install SharePoint 2013 you get one search engine, and this is the only search engine in SharePoint 2013: the integrated SharePoint and FAST search engine. What do you have to know about it? The initial configuration is much easier.

So what can we see in this picture? First of all, we have the content sources. We have about the same set of content sources as before in SharePoint 2013: we can crawl SharePoint content, we can crawl file shares and websites, we can crawl Exchange public folders, and so on.
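The picture this section walks through (content sources feeding a crawler, a content processing component, the index, and a query processing component serving a front end) can be sketched in miniature like this. Every class and method name below is illustrative, not part of SharePoint’s actual API:

```python
class Index:
    """Stores processed items and answers queries."""
    def __init__(self):
        self.items = {}  # item id -> processed item

    def add(self, item):
        self.items[item["id"]] = item

    def search(self, term):
        return [i for i in self.items.values() if term.lower() in i["body"].lower()]


class ContentProcessor:
    """Stand-in for metadata extraction and linguistic processing."""
    def process(self, raw):
        return {
            "id": raw["id"],
            "body": raw["body"],
            "language": "en",  # placeholder for real language detection
        }


class Crawler:
    """Enumerates items in a content source and hands them to processing."""
    def __init__(self, processor, index):
        self.processor, self.index = processor, index

    def crawl(self, content_source):
        for raw in content_source:  # one item at a time
            self.index.add(self.processor.process(raw))


class QueryProcessor:
    """Takes a query from a client application and returns index results."""
    def query(self, index, term):
        return index.search(term)


# Wire the components together the way the architecture picture does.
index = Index()
Crawler(ContentProcessor(), index).crawl(
    [{"id": 1, "body": "SharePoint search basics"},
     {"id": 2, "body": "Exchange public folder item"}]
)
results = QueryProcessor().query(index, "sharepoint")
print([r["id"] for r in results])  # -> [1]
```

In the real product these are separate, scalable components in the search topology; the sketch only shows the direction the data flows, from content source to index and from client query back through the query processor.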
And through BCS you can crawl custom content sources too. The first component that interacts with those content sources is the crawler. The crawler (I’m going to explain this in much more detail) is the component that goes to the content source, enumerates the items that have to get crawled, collects them, and passes them to the content processing component. The content processing component is the one that processes the content items: it extracts the metadata, runs the linguistics, and finally those items get indexed.

From the other side we have the front end, and this front end can be any kind of client application: it can be a Search Center, it can be a search web part, it can be your own application using the Search APIs, and so on. So we have a client application that sends queries to the query processing component. The query gets processed (I’m going to explain this later), and of course the query processing component gets the results from the index component, so the results come back to the query processing component, which provides them to the client application.

Besides this crawling, indexing, and querying process, we also have a lot of analytics. As in SharePoint 2010, a lot of information gets stored for us from the content, from the queries, and from the user actions on those queries, and all that information goes to the different analytics databases. Based on them, a lot of reports and analytics can be provided.

Let me talk about the next new concept in SharePoint 2013, and this new concept is the continuous crawl. Continuous crawl is very important, because in the earlier versions of SharePoint we had the full and the incremental crawling, and this was fine, but in some cases it was not enough. So we have this new concept of continuous crawl for SharePoint content sources, and this means that the content processing is kind of continuous, much faster, much more dynamic, and
can run at the very same time as full and incremental crawls. This is one of the most important things about continuous crawl: if you have a huge environment where a crawl takes weeks, or sometimes even more, then besides a full crawl you can run a continuous crawl, so you can pick up the changes in a dynamic way, and that might be very important in your environment. But the most important limitation, once again: it’s available for SharePoint content sources only. I will talk about continuous crawl a bit later.

The next big improvement is that you can delegate some search administration tasks to your site collection or site administrators. That means that besides still having the capability to do search administration in Central Administration, your site collection administrators can now work with the search schema, so they can create their own managed properties and so on, and you can give them the power of taking control and ownership of their own search environments. With this, for example, Marketing can have different search solutions than IT, without involving, or with minimal involvement of, the global search administrators.

And of course we have a lot of user interface enhancements. I will explain those enhancements in much more detail, but one of them is the hover panel that provides a lot of additional information, a preview of the document, and so on,
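The difference between the crawl types discussed above can be shown with a small sketch. The function names and the in-memory "content source" are assumptions for illustration only; real crawls are configured per content source in Central Administration:

```python
def full_crawl(source):
    """Process every item, regardless of whether it changed."""
    return [item["id"] for item in source]

def incremental_crawl(source, last_crawl_time):
    """Process only items changed since the previous crawl finished."""
    return [item["id"] for item in source if item["changed_at"] > last_crawl_time]

def continuous_crawl(source, since):
    """Conceptually an incremental pass that is kicked off every few minutes
    and is allowed to overlap a running full or incremental crawl."""
    return incremental_crawl(source, since)

source = [
    {"id": 1, "changed_at": 100},
    {"id": 2, "changed_at": 250},
    {"id": 3, "changed_at": 400},
]

print(full_crawl(source))              # -> [1, 2, 3]
print(incremental_crawl(source, 200))  # -> [2, 3]  (changes since the last crawl)
print(continuous_crawl(source, 350))   # -> [3]     (picked up within minutes)
```

The point of continuous crawl is the overlap: it does not have to wait for a long full crawl to finish before picking up recent changes, and as noted above it applies to SharePoint content sources only.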
and we also have a new concept for displaying the results and the refiners with display templates; I will explain them in the UI topics.

Let me talk about the content processing in more detail now. The first step of content processing is the crawling. First of all, the crawler starts the process and goes to the content source, asking for the list of items that have to get crawled. In the case of a full crawl this is the full list, the list of all the items; in the case of an incremental crawl, it is the list of the items that have changed since the last crawl. For example, if I am the crawler and I would like to get information about you, the attendees of this class, I would first go to the host of this training, as the host of the content source, asking for the list of your names, and the host would send me the list of your names, or the list of your IDs. So this is the second step: the content source gives this list back to the crawler. The third step is when the crawler goes through this list, enumerates the items on it, and crawls them one by one. Once I get the list of your names from the host, I go through the list and ask: what’s the email address, what’s the job role of this and that user, one by one, if I do a full crawl. And of course this is a loop, a process that has to run until the last item on the list. Just remember this picture, because we will use it later. So once again: the crawler first gets the list of the items from the content source, and then, based on this list, it goes back to the content source asking for each item, one by one. It does this on multiple threads, but still, each thread crawls one item at a time.

The next step is capturing the changes; this happens with each item during the content processing. There are three types of changes that can happen, and this matters during the incremental crawl, because during the full crawl each item gets crawled anyway. So what could
change? First, something can be deleted. That means someone deleted an item from the content source, and after the next crawl this item should not be displayed in the results anymore. So when something gets deleted, we have to remove it from the index, or at least flag it with a deleted tag. The second thing that can change is the permission settings of the item: I can remove some users from the security list of this item, or I can add someone. If I add someone, that person has to be able to see this item in the results after the next crawl; if I remove someone, that person should not be able to see that result in the result set afterwards. So this is one more type of change that can happen to the items. And the next thing, of course, is when some metadata, or the body, the content itself, gets changed. This is when you edit the properties of the document, or when you edit the document itself, so this is a real content change. The content processor picks up these kinds of changes, and as you can see, processing them involves a lot of steps: for example, it extracts the crawled properties, creates the managed properties, gets the security descriptor, and does language detection, which is very important; I will show you some examples of how it works with different languages in the same environment. It also does a lot of other linguistic steps, extracts metadata, and so on, and finally does some analytics, and then everything goes to the index. So this is how the content processing works.

Thank you for watching this video. Why not check out some more great how-to videos, or subscribe to our YouTube channel for new videos as they’re released.
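The three change types above can be sketched as operations against a toy index; the dictionary layout and function names here are illustrative assumptions, not how the real index stores items:

```python
# A toy index: item id -> body text plus an access control list.
search_index = {
    1: {"body": "quarterly report", "acl": {"alice"}},
    2: {"body": "team wiki page",   "acl": {"alice", "bob"}},
}

def apply_change(index, change):
    if change["type"] == "delete":
        # Deleted items must disappear from results (remove or tombstone).
        index.pop(change["id"], None)
    elif change["type"] == "security":
        # Only the security descriptor changes; no reprocessing of the body.
        index[change["id"]]["acl"] = change["acl"]
    elif change["type"] == "content":
        # A real content change goes through the full processing pipeline
        # (property extraction, language detection, linguistics, indexing).
        index[change["id"]]["body"] = change["body"]

# Item 1 was deleted; item 2 had bob removed from its permissions.
apply_change(search_index, {"type": "delete", "id": 1})
apply_change(search_index, {"type": "security", "id": 2, "acl": {"alice"}})

print(sorted(search_index))    # -> [2]
print(search_index[2]["acl"])  # -> {'alice'}
```

Notice that only the third case, a real content change, needs the expensive processing steps; deletions and security-only changes can be applied to the index directly, which is part of why incremental and continuous crawls are fast.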
