SEO On Page: Optimising the Content of a Website

      Mirco Cilli


Optimising every aspect of On-Site SEO is a fundamental activity for achieving good positioning on search engines.
It goes without saying that a deep, detailed knowledge of the most important techniques for making a website grow is paramount.

Content Index

  • On-Site SEO - Definition
  • On-Site SEO - Objective
  • On-Site SEO - Main Actions
  • Internal Structure and Links
  • Navigation Path
  • Site Map
  • The Robots.txt File
  • Canonical Tag
  • Crawl Budget
  • Performance and Mobile

On-Site SEO - Definition

On-Site SEO is the activity that covers all the technical work with a site-wide impact on a website, enabling better positioning on search engines.

On-Site SEO - Objective

The goal of every On-Site SEO activity is to organise and provide the best possible site structure. From an operational point of view, this means mapping the quality and quantity of the information placed on every page of the website.
Once the site structure has been built and the entry points mapped, SEO work moves on to keyword research, uncovering users' search intents and covering them with as many relevant keywords as possible.

On-Site SEO - Main Actions

When it comes to SEO optimisation, every website layout has to be conceived and designed on the basis of a hierarchical (tree) structure, in order to make navigation as simple and functional as possible.
Once the macro and micro hierarchical structure of the website's pages has been identified, it is possible to start working on a series of technical factors linked to On-Site SEO activity.

Internal Structure and Links

Always keeping in mind the tree structure a website needs to be built on, it is paramount to work on creating a good structure of internal links.
This achieves two specific goals:

  • Users: it improves navigation and the experience on each page, making it easier to move between categories and subcategories
  • Search engines: it makes it easier to understand the structure of the website and the topic of each page, thanks to the anchor texts, and it fuels the transfer of Link Juice (value) from the source page to the destination page (see the example below)
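
As a minimal illustrative sketch (the URL and anchor text are hypothetical), an internal link placed in the body of a page could look like this:

Example:
<a href="https://site.it/category/subcategory/">Blue slim-fit jeans</a>

The descriptive anchor text tells both users and search engines what the destination page is about, and the link passes part of the source page's value on to it.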

Navigation Path

Aside from internal links, Breadcrumbs are also fundamental in defining the hierarchical structure of a website.
They are HTML text paths (links) that allow users to see their position on a page relative to the starting point (usually the home page).

Breadcrumbs aim to lead users as they browse the website, guiding them among categories and subcategories.
They increase the value of the user experience and also improve the positioning of a website, because a clear indication of the site's hierarchical structure helps search engine bots create context and classify the content of each page.

Example
Site.it/category > Breadcrumb: Homepage > Category Name
Or
Site.it/category/subcategory > Breadcrumb: Homepage > Category Name > Subcategory

When it comes to positioning, both examples are correct. What matters is that the page hierarchy is respected.
Implementing Breadcrumbs requires adding specific markup to the site's pages, using the schema.org vocabulary definitions correctly. The markup can be added manually or through plugins that support structured data.
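
As a minimal sketch, breadcrumb markup using the schema.org BreadcrumbList type in JSON-LD (site name and URLs are illustrative) could look like this:

Example:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Homepage", "item": "https://site.it/" },
    { "@type": "ListItem", "position": 2, "name": "Category Name", "item": "https://site.it/category/" },
    { "@type": "ListItem", "position": 3, "name": "Subcategory", "item": "https://site.it/category/subcategory/" }
  ]
}
</script>

Each ListItem mirrors one step of the visible breadcrumb trail, so the markup and the on-page path must stay consistent.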

Site Map

The website architecture and structure described above can be used by search engine crawlers to scan and index the content of its pages.
Hierarchy and content organisation are collected in a Sitemap, usually an .xml file that helps search engines understand the content placed on a website.
It is worth underlining that search engines can find the content of a website even without a Sitemap, provided the pages are connected by adequate internal links.
Despite this, creating a Sitemap is very important because it lets the bots crawl more efficiently, above all when:

  • the website is very big (e.g. an e-commerce website)
  • the website is new and so receives few inbound links
  • the website does not have a correct link structure between its pages

Under these circumstances the Sitemap becomes paramount in letting search engines trace and find all the pages to scan and index, preventing some of them from being missed while crawling is underway.
Many tools available online can be used to create a Sitemap, such as Google Sitemap Generator, or plugins such as Yoast SEO when using WordPress. Once created, the Sitemap file can be submitted in Google Search Console.
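
As a minimal sketch, an XML Sitemap (URLs and dates are illustrative) could look like this:

Example:
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://site.it/category/</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
  <url>
    <loc>https://site.it/category/subcategory/</loc>
    <lastmod>2021-06-01</lastmod>
  </url>
</urlset>

Each <url> entry lists one page to crawl; the optional <lastmod> tag tells crawlers when the page was last updated.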

The Robots.txt File

This is a plain text file to be uploaded to the root directory of your domain. Its purpose is to give search engines a series of indications about which pages should be scanned and which should not.
These indications have to follow preset directives strictly, depending on whether the instructions are meant for all search engines or only for some of them (e.g. Google, Bing and so on).
Every directive has to refer to one or to all search engines (User-agents). The standard directives that can be used in the robots.txt file are:

  • Disallow
  • Allow
  • Sitemap
  • Crawl-delay

Example:
User-agent: *
Disallow: /privacy/

In this case every search engine is instructed not to scan the “privacy” folder.

Example:
User-agent: Googlebot
Disallow: /privacy/

In this case only Google’s crawler is instructed not to scan the “privacy” folder.

Every website, particularly a large one, should create and use the robots.txt file to point to its own Sitemap and, where needed, to exclude one or more pages from scanning by search engines, as in the sketch below.
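
As a minimal sketch, a complete robots.txt combining these directives (domain and paths are illustrative) could look like this:

Example:
User-agent: *
Disallow: /cart/
Allow: /
# Crawl-delay is ignored by Googlebot but honoured by some other crawlers
Crawl-delay: 10
Sitemap: https://site.it/sitemap.xml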

Canonical Tag

The Canonical Tag is used to avoid cannibalisation problems between pages. Its purpose is to indicate to search engines the “official” version of a web page when more than one copy of it exists on the website.
A web page accessible from more than one URL, or different pages with the same content, are considered by search engines as duplicate versions of the same page.
If no canonical URL is declared for a duplicated page, Google automatically determines which version is canonical and should be scanned, treating the other duplicate URLs as URLs to be scanned less frequently.
The rel="canonical" link tag has to be placed in the <head> section of the HTML page.
Example:
<link rel="canonical" href="https://sitodipantaloni.it/jeans-slim-fit-blu/" />
Setting Canonical tags is very important because it is the way to tell a search engine which “main” page needs to be scanned, which one is to be considered the leading one, and which should therefore receive keyword associations and positioning priority over the other duplicated pages.
It can also happen that, without a canonical indication, Google treats one or more duplicated pages as separate pages, producing an extremely high number of indexed pages and causing problems in terms of Crawl Budget, ranking or even penalties.

Crawl Budget

Above all for big websites, such as e-commerce sites, a key factor is the so-called Crawl Budget, the amount of crawling resources Googlebot allocates to scanning the pages of a website.
Two aspects to take into consideration when evaluating the Crawl Budget are:

  • The number of pages scanned per day
  • The time taken for each scan

The more pages the crawler scans per day, and the less time each scan takes, the better the evaluation and reputation of the website from the search engine’s point of view.
Monitoring the Crawl Budget over time through the Search Console panel, and checking whether any pages are being scanned that should not be because they are unimportant for indexing, are paramount activities.
In the latter case, disallowing those pages helps improve crawler performance, since no time is wasted scanning them again.

Performance and Mobile

Further aspects to take into consideration for On-Site SEO optimisation are undoubtedly the website’s navigation speed, loading time, security and the user experience on mobile devices.
Using the HTTPS protocol (Hypertext Transfer Protocol Secure) instead of HTTP is very important for a website, in order to ensure secure communication between the browser and the server thanks to SSL/TLS encryption.
Optimising the navigation speed of a website is extremely important as well, both for search engines and for users, because a slow website will inevitably be abandoned, increasing its bounce rate.
There are many aspects that can be worked on to enhance the performance of a website:

  • Image and media size and weight
  • Server response time
  • Caching and CDN systems
  • CSS, JS and HTML minification
  • Third-party requests
  • Further aspects

Google PageSpeed Insights is fundamental for testing the speed of a website, identifying potential problems and opportunities that can help make pages load faster.
The information obtained from speed tests is also extremely useful for understanding the mobile scores.
Given the growing number of mobile searches, On-Site SEO needs to take into consideration every aspect of optimisation for mobile devices in terms of performance and usability.
With Google’s Mobile-Friendly Test it is possible to check whether a web page is optimised for mobile devices. To analyse the mobile usability of a website, Google Search Console can be used instead, to examine interactions with pages and find potential problems and usability errors.

Mirco










SEO On Page: Optimising the Content of a Website

On-Page SEO is the activity that covers the optimisation techniques affecting the individual pieces of content on a website.

It is therefore very important to focus on the main ranking factors present on each page, in order to improve positioning in the SERPs.

The optimisation techniques implemented on the individual pages of the site must pursue a twofold objective:

  • Make sure the search engine understands exactly which Search Intent the page is relevant for and which queries it should rank for.

  • Make sure the user has a good experience on the page they selected from the search results.

 

 

Snippets and Rich Snippets

The results of correctly optimising the various On-Page factors within the site’s pages are visible in the Snippet, the box containing a search result in the SERP.

A Snippet is made up of:

  • The page URL
  • Title
  • Description

Snippets that also contain additional information are called Rich Snippets.
They can include:

  • Reviews
  • Maps
  • Other information

The additional elements are called Sitelinks, i.e. links pointing to specific pages of the site.

There are also several types of Snippet:

  • Knowledge Graph snippet
  • Featured snippet
  • Numbered list
  • Bulleted list
  • Table
  • Video
  • Etc.

As a general rule, snippets cannot be “forced” for a website, since it is Google that decides when and how they are shown.
SEO can, however, work on optimising the page to encourage the search engine to include sitelinks.
Even though these are not a ranking factor, they can help increase the CTR.
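
Review and rating Rich Snippets, for example, depend on structured data being present on the page. As a minimal sketch, an AggregateRating declared in JSON-LD (product name and figures are illustrative) could look like this:

Example:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Blue slim-fit jeans",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "87"
  }
}
</script>

Adding the markup makes the page eligible for the rich result, but it remains Google’s decision whether and when to show it.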

 

Keyword Research and Search Intent

The starting point for a sound SEO strategy is defining the topic and understanding the Search Intent.
From there, the analysis moves on to researching the keywords that will cover that specific intent.

Research therefore usually starts from a generic keyword and progressively extends towards the long tail, including keywords with low search volumes.
At this stage it is very useful to use dedicated keyword analysis and research tools, as well as to exploit the related searches Google shows directly in the SERP.

Analysing the SERPs provides a set of important information for understanding users’ search intent, working out which content is needed to be authoritative for a given query, and defining the number of keywords/queries that answer the same Intent.

 

Title Tag

It is the most important HTML tag for optimising a specific piece of content.

Its role is to specify the title of a web page and therefore communicate its main topic at first glance.

It appears as the (blue) link at the top left of the search result in the SERP.

For optimisation it is best to use the main keyword of the content, a related keyword, or a combination of the main queries.

The Title Tag should be no longer than about 60 characters.

 

Meta Description

For some years now the meta description has no longer been a ranking factor, but it can be very useful for increasing the CTR.

Its goal is to guide, persuade and attract the user to click through to the site by means of a short sentence.

For e-commerce sites, for example, the meta description can be used to communicate incentives that are potentially attractive to the user: 24-hour delivery, free shipping, a 10% discount, and so on.

Its maximum length should be about 160 characters.
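
As a minimal sketch, the Title Tag and Meta Description sit in the <head> of the page like this (the shop and wording are hypothetical):

Example:
<head>
  <title>Blue Slim-Fit Jeans | Free 24-Hour Shipping</title>
  <meta name="description" content="Discover our blue slim-fit jeans: free 24-hour shipping and 10% off your first order." />
</head>

Here the title stays within roughly 60 characters and the description within roughly 160, so neither is truncated in the SERP.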

 

Slug

In WordPress, the slug is the piece of text that appears after the domain name in the page URL.
In short, it is the part of the site URL that identifies a single page.

From an SEO perspective, it is best to generate a slug that uses the main keyword, possibly a related one or a combination of the main queries.
In general, the title set for a given piece of content can be used.

Example:

https://news.blubit.com/seo-on-site-azioni-e-tecniche-per-lottimizzazione-del-sito/

 

Yoast SEO

For websites built on WordPress it is possible to optimise the main On-Page factors directly with a single plugin.

Yoast SEO is part of every SEO Specialist’s toolbox, as it makes it possible to set, for every page, post or product, the main keyword of the content, the title tag, the meta description and the slug.


The relationship between these factors and the page content shows how good the SEO optimisation is, through a traffic-light indicator with different colours:

  • Green: good result
  • Yellow: needs improvement
  • Red: problems

It is worth pointing out that it is not always essential to follow the colours and the suggestions issued by the plugin.
What matters is the relevance of all the on-page factors and the quality of the content.

 

Content

The text of a piece of content is one of the most important ranking factors of all.

Keywords should be placed in the text in a completely natural way, given that so-called “keyword density” does not exist as a ranking factor.

The most important part of a piece of content sits above the fold, that is, the part of the page displayed before scrolling.

In this area it is therefore useful to let the search engine find:

  • The H1
  • The main queries to rank for
  • An introduction to the topic
  • Internal links to related content
  • A table of contents

Content must be unique.

Every page and every article must therefore have content that answers the queries making up the identified user search intent, so that it can be indexed, rank and thus bring traffic.

There is no optimal length for a piece of content; as a rule it is good to write a text of at least 700 words.
In any case, the length depends on the search intent and on the type of topic being covered.

Page content is very important for e-commerce sites as well, and it should be tied to the title and description that have been set.
It is therefore good practice to include portions of text on category and product pages too, since their complete absence is not looked upon favourably by search engines.

 

Heading Tags

Their role is to organise the priority of the content and to give both the search engine and the user an indication of the topic of the section they introduce.

Heading Tags go from H1 to H6 and follow a descending order of importance in the page layout.

They should not be loaded with keywords (stuffing), but simply used to give the page a good structure: title, section, subsection, and so on (see the sketch below).

Be careful not to confuse the H1 with the Title tag.
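
As a minimal sketch, a correctly nested heading structure (the topic is illustrative) could look like this:

Example:
<h1>On-Page SEO Guide</h1>
<h2>Title Tag</h2>
<h2>Heading Tags</h2>
<h3>H1 vs Title Tag</h3>

There is a single H1 describing the whole page, while the H2 and H3 tags introduce sections and subsections in descending order of importance.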

 

Images

Optimising images is very important because they help convey the overall meaning (topic) of the content. Images can also rank in the image SERP and help increase the page’s traffic.

The main elements to optimise are the following (a sketch follows the list):

  • Weight: (< 200 KB), since heavy images can weigh down and therefore slow the site (use the JPEG or PNG format and compress them)

  • Filename: the name of the image file (it should contain the main keyword)

  • ALT tag: the alternative text (it should describe the meaning of the image and contain one of the keywords used)
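
As a minimal sketch, an optimised image tag (file name and text are hypothetical) could look like this:

Example:
<img src="/images/blue-slim-fit-jeans.jpg" alt="Blue slim-fit jeans worn with a white shirt" width="800" height="600" loading="lazy" />

The file name contains the main keyword, the ALT text describes the image and reuses one of the page’s keywords, and the compressed JPEG should stay under 200 KB.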

 

Mirco 

