Resources

Glossary of terms

In SEO, you are sure to come up against lots of abbreviations, jargon, and technical terms. As this can be overwhelming for those new to the practice, we have put together a glossary to keep you up to date.

Demystifying SEO Language

Our comprehensive glossary is your key to unraveling the world of SEO terminology, making it accessible and understandable for beginners and experts alike.

Cached Page

A cached page is a web page that has been saved either by a search engine on its servers or by a user's browser on a computer or mobile device. When a user visits a web page, the browser stores a copy of the page's HTML and images as web documents.

Relying on cached pages is not usually recommended: the digital world and its web pages change frequently, so if a page has been updated and the user loads a cached version, they will see the previous version and miss all the new updates. There are two ways of fixing this: either refresh the page a few times or, if that does not work, clear the browser's cache.

However, an outdated version of a web page is not always a bad thing: a cached copy loads faster than fetching the latest version.

Caching

Caching refers to the process of storing data in fast-access hardware, commonly known as a cache. A cache often stores a range of temporary data as a way to improve loading speed for the browser and the sites it visits.

Some benefits of caching are:

  • Improved application and site performance
  • Reduced data-storage costs
  • Reduced load on the backend
  • More predictable performance
  • Elimination of database hotspots
  • Increased read throughput
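
As a minimal illustration of how caching is controlled in practice, a server can tell browsers and CDNs how long to keep a copy of a resource via the standard Cache-Control response header (the values below are placeholder assumptions, not a recommendation):

    HTTP/1.1 200 OK
    Content-Type: image/png
    Cache-Control: public, max-age=86400

Here, public allows shared caches (such as CDNs) to store the image, and max-age=86400 lets any cache reuse it for 24 hours before fetching a fresh copy.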

Canonical Tag

A canonical tag is an HTML element inserted in the source code of a web page to indicate which URL is the original (canonical) version of its content. These tags act as a signal to search engines, telling them which version of a page to index and present to users in search results.

The canonical tag is sometimes seen as similar to the 301 redirect, but the two are far from technically equivalent. A canonical tag indicates to search engines which version of a page, the original and unique one, should be indexed and shown to searchers. A 301 redirect (like other redirect codes) tells the browser that a page has moved permanently to a different URL, replacing its former address. With a canonical tag, the duplicate page still exists and a user can still land on it; with a redirect, the user is automatically sent to the new URL, because the old version of the page no longer exists.
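
To make the difference concrete, a minimal sketch with placeholder URLs: the canonical hint lives in the page's own HTML, while the 301 is an HTTP response sent by the server.

    <!-- Canonical hint inside the duplicate page's <head> -->
    <link rel="canonical" href="https://www.example.com/original-page/">

    HTTP/1.1 301 Moved Permanently
    Location: https://www.example.com/new-page/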
 

Canonical URL

A canonical URL is the URL of the single page Google presents to a user after a search as the original, representative page from a set of duplicate pages on your site. Also known as the canonical link or canonical tag, canonical URLs were introduced in February 2009 to address the need to identify duplicate content and pages and select the original ones.

Canonical URLs or tags are essential in SEO, as they help pages get indexed correctly, which in turn leads to better treatment by Google's algorithm. It is also important to check that your website's content is distinct from other content on the site and from that of competitors; otherwise it will be identified as duplicate content, affecting organic rankings. Using canonical URLs helps avoid these problems.

To implement canonical URLs on a site:

  • Add the canonical tag to the head of the page, like this: <link rel="canonical" href="https://www.example.com/page/">.
  • It should go in the head, not the body, and appear only once; if the tag is used more than once, Google will ignore it entirely.
  • Canonical URLs must never point to "noindex", "disallow" or "nofollow" pages, as if Google cannot read the target it will simply ignore the tag.
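
Putting it together, a minimal sketch of a correctly placed canonical tag (the URL and titles are placeholders):

    <!DOCTYPE html>
    <html>
      <head>
        <title>Red Trainers | Example Shop</title>
        <!-- This page is a variant; point search engines at the preferred URL -->
        <link rel="canonical" href="https://www.example.com/shop/red-trainers/">
      </head>
      <body>
        <!-- page content -->
      </body>
    </html>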
 

ccTLD

A ccTLD (country code top-level domain) is a territorial domain extension that belongs to a specific country. For example, as the territorial code for Germany is DE, its domain extension is '.de'.

ccTLD extensions are always made up of a two-letter code, the same code used to catalogue each country under the international standard ISO 3166. Besides ccTLD extensions, which are subject to the requirements of each country's domain registry, there is also the option of using gTLD extensions (generic top-level domains), which are subject to international regulations.

When it comes to SEO, the choice between ccTLD and gTLD extensions matters a great deal: ccTLDs are known to be the strongest way to tell users and search engines that a site and its content specifically target a particular country or region. As a result, Google reads the site as especially relevant to the geographic area marked by the ccTLD and tends to show it on that area's SERPs.

Citations

A citation is an online reference to your business on another site, featuring the business's name, address and phone number; it may also include a direct link to your site.

When it comes to SEO, citations are essential for a website's indexing and rankings, as these references help crawlers judge the site's relevance and decide where to rank it among competitors. Citations are mainly considered a key factor in local SEO.

There are many opinions about which citation websites are best in the SEO world; nevertheless, SEO experts agree that the main citations to consider for local SEO are:

  • Google My Business
  • Bing Maps
  • Facebook
  • Yelp

Click-through Rate

CTR (click-through rate) is a metric in Internet marketing that measures the number of clicks advertisers obtain on their ads divided by the number of times the ad has been shown (also called impressions).

The resulting formula looks like this: clicks / impressions = CTR. For example, if an ad has received 15 clicks and 100 impressions, the CTR obtained would be 15%.

This metric is mostly used as a tool to measure a digital campaign's impact on an audience. Achieving a high click-through rate is essential for a business's PPC (pay-per-click) success, as it affects both the Quality Score and, at the same time, how much advertisers pay every time someone clicks on their search ad.

The main platform used to measure CTR is Google Search Console, formerly known as Webmaster Tools, which allows marketers to check their pages' CTR in Google's search results.

CTR has also become an important factor in SEO. It is worth making your links attractive on the SERPs in order to earn a higher number of clicks, which in turn will increase your CTR.
 

Client-side and Server-side Rendering

Client-side rendering (CSR) means a website is generated in the browser, whereas server-side rendering (SSR) means it is generated on the server. To explain the differences:

Client-side rendering:

  • The server sends JavaScript code to the browser, which interprets that code in order to build the HTML and CSS.
  • The initial load takes longer, so a loader or skeleton screen is often shown to bridge the wait.
  • Navigation and data display are fast after the first load.
  • It can hurt SEO, as the initial HTML is nearly empty.
  • It is typical of SPAs (single-page applications) and web applications.

Server-side rendering:

  • The server sends a complete web page (HTML, CSS and JS) to the browser.
  • The initial load is faster.
  • The view of the website is built on the backend using templates.
  • Search engines can crawl the full content, which is better for SEO.
  • A CSR site can be transformed into an SSR site using frameworks such as Gatsby, Next.js or Nuxt.
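
As a minimal sketch of the difference from a crawler's point of view (the file names and markup are assumptions):

    <!-- CSR: the initial HTML is an empty shell; content only appears after app.js runs -->
    <body>
      <div id="root"></div>
      <script src="/app.js"></script>
    </body>

    <!-- SSR: the server sends the finished markup, ready to read and index -->
    <body>
      <h1>Red Trainers</h1>
      <p>In stock with free delivery.</p>
    </body>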
     

Cloaking

Cloaking is an SEO technique based on showing different content to human visitors and to crawling robots like Googlebot. Though it started as a popular technique among webmasters because of its quick results, it became less effective over the years and ended up being penalised by Google as black-hat SEO.

Cloaking was formerly used as a way to optimise the content shown to bots without having to provide a good experience for human users.

Some cloaking techniques often used are:

  • Programming the server to display different content depending on the client that made the page request; for example, serving the highly optimised version whenever the request comes from Google.
  • Using JavaScript redirects, since until recently search engines could not read and run this code, so it could be used to present different content to robots and people.
  • Showing different content depending on the IP address from which the visitor is coming.

There are still some SEO experts using cloaking and other black-hat techniques, aware that Google might penalise them. If you want to check whether your own website is serving crawlers something different, you can inspect how Google renders your pages with the URL Inspection tool (formerly 'Fetch as Google') in Google Search Console.
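
To make the technique concrete, a minimal, hypothetical sketch of user-agent cloaking in Node.js, shown only to illustrate what the penalised behaviour looks like:

    // Hypothetical illustration of cloaking - this is the behaviour Google penalises.
    const http = require('http');

    http.createServer((req, res) => {
      const ua = req.headers['user-agent'] || '';
      res.writeHead(200, { 'Content-Type': 'text/html' });
      if (ua.includes('Googlebot')) {
        // Crawlers get a keyword-stuffed, "optimised" page...
        res.end('<h1>Cheap red trainers buy red trainers best red trainers</h1>');
      } else {
        // ...while human visitors see something entirely different.
        res.end('<h1>Welcome to our shop</h1>');
      }
    }).listen(8080);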
     

Co-citation

Co-citation is an SEO term that refers to when our website or company is mentioned or cited by two independent sources that are not directly linked to each other, but are related only through their content.

Search engines use co-citation to build a map of content-related sites, deciding which sites are the most relevant and where to rank them on the SERPs.

Co-citation is often mentioned alongside 'co-occurrence', an SEO term for similar keywords appearing across different websites, whereas co-citation refers to a website being mentioned by two different sources that are not necessarily linked.

Co-occurrence

Co-occurrence is an SEO term for related ideas and concepts, expressed through keywords, appearing across different websites. In other words, when Google's bots crawl a piece of content to identify co-occurrence, they look for similar keywords appearing in close proximity on multiple websites: keywords that are similar to each other (not identical) and based on the same theme.

Co-occurrence is often related to 'co-citation', another SEO term that refers to a website being mentioned by two different sources that are not necessarily linked, whereas co-occurrence identifies similar keywords across different websites.

When it comes to SEO, co-occurrence means Google may present a website that does not contain the exact title or description we searched for. How does that happen? Through the association of ideas and terms, Google finds matches in the body text of its results thanks to the keywords used, makes the association, and understands that the page should be presented to the user for that precise search.

Commercial Investigation Queries

A commercial investigation query is a query the searcher types in order to compare different results and find the option that best suits their needs or expectations. This kind of query usually reflects an intention to purchase. Some examples of commercial investigation queries would be:

  • Best store in London to buy Ethiopian food
  • Apple music vs Spotify
  • Best wine cellar in the city to provide your restaurant
  • Best travel package to travel to Disneyland

When it comes to SEO, these kinds of queries will provide you, as a business owner, with relevant information for defining a segmented target, finding potential clients or partners, assessing your competitors and so on. Commercial investigation queries will also surface the best keywords, which in turn help optimise your website for search engines.

Competition

The term competition in SEO refers to all the websites displayed on the SERPs when a user types the same search terms you target. Those websites become your direct online competitors, and you will all be competing for the same keywords or search terms.

Computer Generated Content

Computer-generated content is content produced automatically by software rather than written by a person. Search engines treat automatically generated content created purely to manipulate rankings as spam, so it should be used with care.

Content Delivery Network (CDN)

A content delivery network refers to a set of servers installed in different locations of a network containing local copies of certain content (images, documents, videos, websites, and so on), in order to serve that web content over a wide geographic area. The main goal of content delivery networks is to offer multiple PoPs (Points of Presence) outside the origin server.

These data centres keep the online world connected by bringing content to people on the Internet, regardless of their geographic distance from other users or from the website's origin server. As an example, you are using a content delivery network every time you visit Amazon or Facebook.

Content delivery networks mean a great deal for SEO strategies, as these networks serve users' requests faster, allowing websites to better manage their traffic and resulting in a more satisfying user experience.
 

Content Gap Analysis

A content gap analysis studies the current content performance of a website, identifying aspects to improve and establishing how it differs from competitors. It is an essential part of a good content strategy, as it allows a company to identify opportunities and correct weaknesses.

A good content gap analysis is built around four questions:

  • Where are we?
  • Where do we want to be?
  • How far are we from our SMART goals?
  • How are we planning to accomplish our SMART goals?

A content gap analysis basically starts with a snapshot of the site's current SEO situation: auditing the existing content, tracking which keywords the site ranks for, which content generates more or less traffic and, last but not least, the keywords visitors use to find the website.

Content Hub

A content hub is a centralised online space or tool that offers curated, organised content about a brand or a specific topic so that users can access it all in one place. In other words, a content hub helps you capture users' attention for your business through segmented publications based on your audience's interests and preferences.

When it comes to SEO, using a content hub would help your strategy significantly because:

  • It improves the user experience
  • It generates brand authority
  • It increases visibility and traffic
  • It generates engagement with the user
  • It helps generate leads

Content Marketing

Content marketing is a marketing strategy focused on the creation and distribution of relevant content, in order to attract, acquire and hold the attention of a defined target audience, with the aim of encouraging them to become customers.

Content marketing was formerly based on written pieces, such as corporate magazines. It has since moved to digital spaces, where we find content marketing in the form of blogs, ebooks, infographics and videos, among others.

What are the benefits of using content marketing?

  • It complements any SEO strategy, as search engines value a site with high-quality, regularly updated content.
  • It can help improve the brand's reputation and the branding itself.
  • It can help increase the audience’s loyalty, as websites with resources and content of great quality will offer a better user experience.
  • It can help improve customer service.
  • It improves Public Relations.
  • The content itself feeds the company's paid channels and its social networks.
     

Content Relevance

Content relevance is a criterion search engines use to judge whether a website's content is of high or poor quality and, as a result, to position the website better or worse on the SERPs.

Content relevance is essentially related to a good content marketing strategy and to a proper SEO strategy because of the following benefits:

  • It improves organic rankings
  • It attracts more traffic
  • It improves SEM strategy (paid marketing)
  • It increases the brand’s authority

To ensure our content is relevant, we should take care of:

  • Properly understanding the user
  • Working on the website's structure
  • Generating original and unique content
  • Indexing the content properly

Content Syndication

Content syndication refers to giving other websites' authors permission to republish your own content. This practice is usually adopted when the owner of a website sees that its content is not reaching the expected audience, which amounts to wasted resources.

Content syndication allows publishers to amplify their audience by sharing their content on other websites. At the same time, it helps drive organic traffic to the original website: the syndicating sites tell search engines which website the content comes from, so the engines recognise its quality and reliability and may, eventually, rank the original better.

It is important to know that syndicated content is not considered copy-pasted content, but authorised, distributed content.

Conversion

A conversion is any action carried out by a user on a website that generates benefits and value for that business. Conversion is one of the most important actions an online business hopes to see on its website. That is why companies implement digital marketing strategies that guide the user through the five stages of the funnel, keeping the user's attention as long as possible in order to eventually achieve a higher conversion rate.


The conversion process happens when a key action inside the strategy turns the user from a casual visitor into a lead, a fan or a client, among others. Although the conversion stage is highly important to a company's marketing strategy, none of the other four funnel stages should be neglected, and it is essential to establish the most important business goals the strategy is expected to accomplish.


What are the main factors to consider in order to get a great conversion rate?

  • Establish clear goals.
  • Define the indicators.
  • Measure the conversion rates by using established metrics (KPIs).
  • Optimise the results, preferably with an A/B test; if that does not yield clear conclusions, restructure the website architecture and the whole strategy from the beginning.


However, there are some factors that can negatively affect our conversion rate and that must be taken into consideration. These are:

  • An unattractive or weak offer.
  • Poor content.
  • Unfriendly navigation and unappealing website design.
  • Cumbersome procedures: long forms, confusing questions, or too much data requested from the user.
  • Poor usability.
  • Excessive loading and downloading time.
  • Bad indexing.
  • A poor internal site search.
     

Conversion Rate (CR)

Conversion rate (CR) is the percentage of users who perform a specific action (a conversion) on a website, such as purchasing, downloading, registering or even making a reservation, among others. The CR is obtained by dividing the number of goals achieved by the number of unique users who visited the website. For example, 50 purchases from 1,000 unique visitors gives a CR of 50 / 1,000 = 5%.


The conversion rate can be applied to an entire website or, instead, to each of the products or services offered. Marketers generally recommend the latter, applying the conversion rate to each product or service, as it gives clearer, more accurate results. This KPI allows us to analyse many situations in our e-commerce, such as whether a client makes several purchases in a single visit, or perhaps makes different visits on different days with a single purchase each time, among others.


Some factors that can affect the improvement of a conversion rate are:

  • Functionality
  • Accessibility
  • Usability
  • Intuitiveness
  • Persuasion 

Cookies

A cookie, in the digital world, is a small data packet that a browser automatically stores on a user's computer every time the user visits a web page. The main purpose of cookies is to recognise a user from their interaction with a web page, so as to offer them more appropriate or specific content based on those browsing habits the next time they visit.


Cookies allow web pages to identify your computer's IP address, so that the next time you visit, the site remembers you, your previous searches and therefore your preferences: as simple as knowing which language version of a page to present based on the one you last visited. Being able to log in to a web page without having to remember or retype your credentials (username and password) is also down to cookies: they store those credentials, and the next time you want to enter that page they offer you the chance to log in with them, or with different ones in case you are not the person the stored data belongs to.


It is always important to be aware of cookies and online security, as we are essentially entrusting websites with our most personal information, such as bank account or health insurance credentials. That said, as far as we know, cookies cannot transmit or execute viruses or install malware such as Trojans or spyware. In any case, every time we visit a new web page or log in somewhere, the site asks whether or not we accept its use of cookies.


Cookies are highly relevant in the digital world because they allow a company to segment its audience and distinguish one user from another, improve the user experience, serve as a source of statistical data, help brands create remarketing (retargeting) campaigns and provide a better understanding of user behaviour.
 

Core Web Vitals

Core Web Vitals, also known as Main Web Metrics, are metrics used to evaluate the performance of a website depending on its user experience (UX).


To ensure that their websites do not fall behind in rankings, businesses everywhere have been shifting their focus to prioritising user experience; analysing and making the appropriate changes to make sure that each of their indexed pages are as accessible and as user friendly as can be.


The Core Web Vitals consist of three measurements that score a website visitor's experience when loading a specific webpage: page speed (how quickly the content on the page loads), responsiveness (how long the browser takes to respond to a user's input) and visual stability (how stable the content is as it loads in the browser).

  • Largest Contentful Paint (LCP): LCP measures load time from the user's perspective. It is the time from clicking the link to the page until its largest content element has finished rendering. For the best result, your LCP should be no more than 2.5 seconds.
  • First Input Delay (FID): FID focuses on interactivity. Put simply, it is the delay between a user's first interaction with the page, for example clicking an option in the menu tab, and the browser being able to respond to it. For the best result, the FID score should be at most 100 ms. The less time, the better.
  • Cumulative Layout Shift (CLS): CLS focuses on the visual stability of the page from the user's perspective, that is, how much its elements move around as it loads. For example, a video pop-up, audio player or other visual element that shifts while the page loads, pushing content away from where the user expected it. For the best result, the CLS score should be at most 0.1. Once again, the lower the score, the better.


In addition to the newly added measurements, the Core Web Vitals update also includes the existing signals:  

  • Mobile Friendliness - focuses on how well the web page displays in a mobile phone browser.
  • Safe Browsing (virus free) - making sure the website is safe to view, with no hidden viruses, malware, etc.
  • HTTPS Security - making sure the website is served over HTTPS (which can be checked in the URL bar of any browser).
  • No Intrusive Interstitials (fewer pop-ups) - focuses on keeping the content clean, avoiding ads or pop-ups that appear unexpectedly while the user is in the middle of an article.


Therefore, Core Web Vitals have become an important factor in determining the new overall page experience score. But how do we measure them?

Core Web Vitals are measured through the performance of the three ranking factors listed above: Largest Contentful Paint (LCP), First Input Delay (FID) and Cumulative Layout Shift (CLS).


As mentioned above, the significance of Core Web Vitals has been noted by many businesses wishing to improve their rankings. Hence, the popular tools for measuring performance metrics have been updated to incorporate the change.


Popular tools such as Chrome User Experience Report, PageSpeed Insights and Search Console have all been updated and can now be used to determine the metrics.
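
For developers, the metrics can also be captured directly in the browser. A minimal sketch, assuming Google's open-source web-vitals JavaScript library (v3 API):

    // Log the three Core Web Vitals as they become available.
    // Requires: npm install web-vitals
    import { onCLS, onFID, onLCP } from 'web-vitals';

    onCLS(console.log);  // Cumulative Layout Shift (target: at most 0.1)
    onFID(console.log);  // First Input Delay (target: at most 100 ms)
    onLCP(console.log);  // Largest Contentful Paint (target: at most 2.5 s)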


Is the optimisation of Core Web Vitals possible?
 

Of course it is. Here are some final tips to further optimise each Core Web Vitals signal.
 

For LCP:

  • Remove unnecessary third-party scripts and large page elements - this can be done with the help of Google Tag Manager.
  • Optimise CSS files.
  • Use image sprites and remove unnecessary code.
  • Optimise image file sizes.
  • Remove web fonts.
  • Reduce JavaScript.

For FID:

  • Remove unnecessary third-party scripts and make use of the browser cache.
  • Reduce JS execution time - minifying or compressing the code will help achieve this.
  • Minimise main thread work.
  • Utilise web workers to do tasks off the main thread.
  • Use small transfer sizes and low request counts.

For CLS:

  • Set specific dimension attributes for any media or visuals: use size attributes on image and video elements (see the sketch after this list).
  • Never insert content above existing content, except in response to a user interaction.
  • Use transform animations instead of animations of properties that force layout changes.
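
A minimal sketch of the first tip, reserving space for an image so the layout does not shift when it loads (the file name and dimensions are placeholders):

    <!-- width and height let the browser reserve space before the image loads -->
    <img src="/images/hero.jpg" alt="Product photo" width="800" height="450">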

Cornerstone Content

Cornerstone content is basically the set of pillars within a website's content. In other words, cornerstone content refers to all the content (articles, posts and so on) that stands out on a blog or website because it best represents the brand: the content the company is most proud of.


The main factors a company should take into consideration in order to generate content worth tagging as cornerstone content are:
 

  • Establish the keywords that will help the URL rank, and why.
  • Use related content as a complement.
  • Optimise the web page, or create it if it does not exist yet.
     

Crawl Budget

Crawl budget refers to the time Google assigns its crawlers to crawl each website. Put another way, crawl budget can be understood as a metric indicating the number of times a website's pages are visited by a search engine's crawlers.

Crawlability

Crawlability refers to how easy it is for a search engine to read and interpret a website, an action known as 'crawling'. Crawlability is closely tied to SEO optimisation and indexing strategies in the digital sector.


Some actions that can help to make a website more crawlable are:

  • Improving a website’s sitemap
  • Using internal linking in a smart way
  • Choosing a good hosting service
  • Optimising meta tags

Some factors that will make a website less crawlable would be:

  • Not having a proper sitemap
  • Having broken links, 404 errors or dead-end web pages.

Crawler

A crawler, also known as a spider, is a small software bot (robot) that reads and analyses a website's code and content, moving from one page to another through its links, in order to categorise and index those pages. Crawlers basically work as data collectors on the Internet: once they have recovered all the information about a website and its business, they study its content, user experience and other factors to decide where to position each website on the SERPs.

Crawlers can, however, be blocked from parts of your site using the robots.txt file. This forbids crawlers from entering those pages and collecting data, but at the same time it restricts the information these programs have available when working out your rankings.
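
A minimal robots.txt sketch (the paths are placeholders) that keeps crawlers out of one directory while leaving the rest of the site open:

    # Applies to all crawlers
    User-agent: *
    # Keep bots out of the private area
    Disallow: /private/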

Crawler Directives

Crawler directives are rules a website gives to search engine crawlers, telling them how to behave on that precise website. These directives can:

  • Tell a search engine whether to follow or not links on that page.
  • Tell a search engine not to use a page in its index after it has crawled it.
  • Tell a search engine to not crawl a page at all.

The most popular crawler directives are 'robots meta directives', also known as 'meta tags'. These give crawlers recommendations on how to crawl or index a specific page.
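
A minimal sketch of a robots meta directive placed in a page's <head> (the combination of values is just an example):

    <!-- Ask search engines not to index this page, but still follow its links -->
    <meta name="robots" content="noindex, follow">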

Crawling

Website crawling is the journey a small software bot (a crawler) makes to read and analyse the code and content of a website. The importance of crawling lies in the fact that if the crawler cannot 'read' your website, your website may not rank at all. Your website should be optimised for crawlers, as they already have a lot of work to do; optimising your pages and simplifying their job will help your rankings immensely.

This information is found in servers. Crawlers identify the number of websites hosted by each server, and work their way down the list.

They do this by visiting each website and collecting data. This data includes:

  • How many pages the website has
  • The website's different types of content: text, video and image content (additional formats include CSS, HTML and JavaScript).

Crawlers are smart and meticulous. This data-collecting process is repeated constantly, meaning that crawlers can keep track of changes made to a website (for example, adding a webpage or deleting an image).

Critical Rendering Path

The Critical Rendering Path (or CRP) is the series of steps a browser performs to convert HTML, CSS and JavaScript code into rendered pages that present the user with the content and the best possible user experience.

These main steps are:

  • Building the DOM tree (DOM ‘Document Object Model’)
  • Generating the CSS Object Model
  • Generating the Render Tree
  • Setting the layout on the web page architecture
  • Presenting (painting) the page’s pixels on the device screen

The top three factors to keep in mind in order to improve the critical rendering path are:

  • The number of critical bytes
  • The length of the critical path
  • The number of critical elements
     

CSS

CSS, short for Cascading Style Sheets, is a stylesheet language used to style elements written in a markup language such as HTML, separating the content from the visual presentation of the site. It is, in effect, a graphic design language that allows you to lay out pages precisely and apply a range of styles to them, such as colours, margins, fonts and shapes.

Defining the design this way gives you greater control over the final result you expect from the website and its pages. A closely related term is HTML, the bigger picture of the pair: HTML is the markup language that forms the base of a site, whereas CSS handles the styling, the aesthetic part.

There are three ways of implementing CSS on a website: internal, external and inline.

  • Internal: CSS written inside a <style> tag on the page itself. It is loaded every time the page loads, which can increase load time and, as a consequence, hurt the user experience. Mind also that internal CSS cannot be reused across different pages.
  • External: CSS kept in a separate .css file, which means you can do all the styling in one file and then apply it to any page you want. It also translates into a better user experience, since the file can be cached.
  • Inline: CSS applied directly to individual elements through the style attribute, which means every component of the page has to be styled one by one.
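
A minimal sketch of the three approaches side by side (the file name and styles are placeholders):

    <!-- External: styles live in a separate, cacheable file -->
    <link rel="stylesheet" href="/styles/main.css">

    <!-- Internal: styles embedded in this page's <head> -->
    <style>
      h1 { color: navy; }
    </style>

    <!-- Inline: a style applied to a single element -->
    <h1 style="color: navy;">Welcome</h1>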
