Google for Jobs for Employers: How to Make Hiring Easier and More Efficient

Why are recruiters struggling to make the most of Google for Jobs? 

Employment data are everywhere. To stay competitive in today’s recruitment market, employers need to focus on developing their career sites to ensure they provide a good application experience to job seekers. Meanwhile, the sheer volume of job boards on the market (like Monster, Indeed, and Glassdoor) confuses candidates.

Where to start? What job seekers really want is a simple, quick and easy way to find relevant jobs. This was exactly what Google envisioned when they introduced Google for Jobs (GFJ). GFJ is a one-stop shop where job seekers use advanced search tools to find the positions that closely match their preferences.

The search giant has positioned itself to dominate online recruitment. However, the platform is not without its issues: recruiting teams are finding it difficult to get their jobs posted and ranked highly. Google for Jobs may be friendly for job seekers, but Google for Jobs for employers can be complex: difficult to understand and difficult to optimize for, given the many ranking signals involved. This is because Google for Jobs’ structured data (aka schema) adds a layer of complexity.

Before diving into the details, let’s look at what makes GFJ so special: 

What is Google for Jobs?

Google for Jobs is a search platform that aggregates job postings from job boards and career sites all over the web. A platform that uses Google’s vast and powerful search tools (like location, company info, and reviews) seems like a no-brainer for recruiters. 

Unfortunately, getting jobs up and ranked highly on GFJ isn’t as easy as one might hope. Google for Jobs for employers and recruiters is far different from the simple process of posting jobs on job boards. Google requires a detailed, complex job-posting schema to be added to the HTML of each job page. This means recruiters need technical expertise just to get their jobs up on Google.

So what exactly does this whole ‘schema’ thing entail? 

In general, structured data on the web is any data that can be organized or given structure. When it comes to posting on GFJ, things are no different. To take advantage of Google for Jobs, job pages need to have structured data, most notably elements and attributes describing the job. Many of these elements and attributes are required (e.g., the title, description, and location of the job, the company, and the posting/expiration dates). Others are optional (e.g., base salary, employment type, etc.). For more details about these various attributes, you can explore some of these job advertisements worth stealing.
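To make this concrete, here is a minimal sketch of what the JSON-LD markup behind a Google for Jobs posting can look like. The company name, dates, address, and salary-adjacent values below are hypothetical examples, and the property list shown is only a subset of what Google documents; the sketch simply builds the payload as a Python dictionary and wraps it in the `<script type="application/ld+json">` tag that would be embedded in the job page’s HTML.

```python
import json

# A minimal JobPosting payload (all values are hypothetical examples).
# title, description, datePosted, hiringOrganization, and jobLocation
# are among the required properties; validThrough and employmentType
# are among the recommended/optional ones.
job_posting = {
    "@context": "https://schema.org/",
    "@type": "JobPosting",
    "title": "Software Developer",
    "description": "<p>Build and maintain web applications.</p>",
    "datePosted": "2024-01-15",
    "validThrough": "2024-03-15T00:00",
    "employmentType": "FULL_TIME",
    "hiringOrganization": {
        "@type": "Organization",
        "name": "Example Corp",
        "sameAs": "https://www.example.com",
    },
    "jobLocation": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "streetAddress": "1 Main St",
            "addressLocality": "Boston",
            "addressRegion": "MA",
            "postalCode": "02101",
            "addressCountry": "US",
        },
    },
}

# Embed the payload in the page's HTML as a JSON-LD script tag.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(job_posting, indent=2)
    + "\n</script>"
)
print(script_tag)
```

Every job page needs its own version of this block, with the values filled in correctly for that specific posting, which is exactly why doing it by hand doesn’t scale.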

This image provides a glimpse into what is required for Google for Jobs’ structured data:

Google for Jobs' structured data requirements

Looks scary!

Structured data for Google for Jobs is Google’s way of ensuring that all the details job seekers want to see actually show up. But this comes at a cost. Employers struggle to create this complex schema because recruiters rarely have the knowledge or technical resources required. They can’t make sense of Google’s HTML and JSON requirements by themselves. And this is just the tip of the iceberg: because the code must be created and embedded on each job page individually, the process is very time-consuming.


Get a detailed guide on Google Schema and Google for Jobs here.


Why is adding structured data to job postings so difficult?

Making sure that every single one of your job postings complies with a specified schema is a headache. Trying to do this manually is time-consuming and requires technical knowledge that most recruiters don’t have.

Automation is possible, but not easy

While it’s possible to automate the whole process, it’s not easy due to the unstructured nature of job posts. Virtually every job differs in form, structure, and content: 

  1. Job postings usually don’t follow the same layout. This makes it difficult to segment a jobs page, and even more challenging to identify specific types of data within the posts.
  2. Posting structure varies from one position to another. Job information, even within the same organization, isn’t always presented in the same way.
  3. The writing process itself is very subjective. Job posts vary from recruiter to recruiter, and depend on one’s writing style and vocabulary. Not to mention differences rooted in language and culture. For example, recruiters often use different words to designate the same job (e.g., software developer, full-stack developer, and ninja developer are all different titles denoting the same job).
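The title-variability problem above can be illustrated with a toy normalization step. The synonym mapping below is purely hypothetical; a real system would need a far larger, continually maintained vocabulary (which is part of why this is hard):

```python
# Toy example: map synonymous job titles (a hypothetical list) to one
# canonical title so equivalent postings can be grouped together.
CANONICAL_TITLES = {
    "software developer": "software developer",
    "full-stack developer": "software developer",
    "ninja developer": "software developer",
}

def normalize_title(raw_title: str) -> str:
    """Return the canonical title, or the cleaned input if unknown."""
    cleaned = raw_title.strip().lower()
    return CANONICAL_TITLES.get(cleaned, cleaned)

print(normalize_title("Ninja Developer"))  # -> software developer
```

A lookup table like this only covers titles someone thought to enumerate in advance; every new phrasing a recruiter invents falls through the cracks, which is what pushes the problem toward machine learning.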

Dealing with such diverse data structures, forms, and content is problematic. Traditional algorithms are unreliable here, and when the complexity of the vocabulary and structure used in job postings gets added to the mix, it’s even worse.

What about Artificial Intelligence? 

We’re glad you asked. Artificial Intelligence (AI) is revolutionizing job posting performance. Deep Learning and Machine Learning (ML) algorithms are often developed to automatically extract information by discovering patterns from a set of documents. However, the ML process is lengthy and complex. It requires: 

Data Collection and Labeling

  • The first difficulty is rooted in the datasets used to train models. The larger the sample size used for training, the better the algorithms’ performance.
  • The second difficulty is data labeling. Supervised learning, in which a model is trained on a labeled dataset, is the most commonly applied AI technique. In the real world, however, labeled data is usually hard to obtain. This means manual annotation is required, a time-consuming process that demands thousands of human judgments by domain experts.
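To give a feel for what that labeled data looks like, here is a hypothetical handful of training examples for tag extraction, where each snippet from a job post is annotated with the schema field it fills. Real training sets need thousands of such human-annotated pairs:

```python
# Hypothetical labeled examples for supervised tag extraction: each
# snippet of a job post is paired with the schema field it fills.
labeled_data = [
    ("Software Developer", "title"),
    ("Boston, MA", "jobLocation"),
    ("$90,000 - $110,000 per year", "baseSalary"),
    ("Full-time", "employmentType"),
    ("Posted on January 15, 2024", "datePosted"),
]

# Collect the distinct target labels the model must learn to predict.
labels = {label for _, label in labeled_data}
print(sorted(labels))
```

Each pair is a single human judgment; multiply that by every field, every posting style, and every domain, and the annotation cost becomes clear.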

Feature Engineering

  • An ML model can’t learn properly with only raw data. Data has to be refined into “features” to be useful for training a model (known as “feature engineering”). 
  • This process is also time-consuming, and requires expert knowledge. It must address a variety of issues: extracting representative features, cleaning and pre-processing data, and transforming data into a form that best fits the model for optimal benefit. 
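As a rough illustration of what "refining raw data into features" means, the sketch below turns raw job-post text into a simple feature dictionary: a toy bag-of-words count plus a couple of hand-crafted signals. The feature names and the salary heuristic are illustrative assumptions, not a production pipeline:

```python
import re
from collections import Counter

def extract_features(text: str) -> dict:
    """Turn raw job-post text into simple numeric features:
    a toy bag-of-words plus two hand-crafted signals."""
    tokens = re.findall(r"[a-z0-9$]+", text.lower())
    features = dict(Counter(tokens))                 # word counts
    features["__has_salary__"] = int("$" in text)    # crude salary signal
    features["__length__"] = len(tokens)             # document length
    return features

feats = extract_features("Senior Developer, $95,000 per year")
print(feats["__has_salary__"])  # -> 1
```

Even this trivial version involves the cleaning, tokenizing, and transforming steps described above; real feature engineering layers on far more domain knowledge.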

Choosing the Right ML Algorithms

  • Once data is collected and labeled accurately, a learning model must be built. This is a crucial stage in the ML process. It’s important to find strong models, with the right parameters, that are capable of producing accurate tag predictions.
  • ML algorithms can’t perform well without adding external, real-world knowledge to better understand the context of information in a job post page. For example, “Java” can be identified as a region OR a programming language in a developer’s job description.
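The "Java" example above can be sketched as a tiny context-based disambiguation rule. The keyword lists here are illustrative stand-ins for the external, real-world knowledge a serious system would draw on:

```python
# Toy disambiguation of an ambiguous term using surrounding context.
# The keyword sets are illustrative, not a real knowledge base.
TECH_CONTEXT = {"programming", "developer", "code", "spring", "jvm"}
GEO_CONTEXT = {"island", "indonesia", "travel", "relocate"}

def classify_java(sentence: str) -> str:
    """Guess whether 'Java' means the language or the region,
    based on which context words appear alongside it."""
    words = set(sentence.lower().split())
    if words & TECH_CONTEXT:
        return "programming language"
    if words & GEO_CONTEXT:
        return "region"
    return "unknown"

print(classify_java("5 years of Java programming experience"))
# -> programming language
```

A sentence with no recognizable context words comes back "unknown", which is precisely the gap that external knowledge sources exist to close.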

All of these factors make tag prediction a complex problem. Many companies have tried to solve it, but handling job posts from every domain and organization, on a global scale, is daunting.


Our Ultimate Guide to Google for Jobs tells you everything you need to know about the platform. Check it out here.


That’s Where Jobiak Comes in 

Jobiak is the only platform to date that can automatically scan job posts and identify the attributes Google requires, in real time. Once the Google tags are extracted, Jobiak’s tool structures them appropriately, all without any human intervention.

Jobiak’s workforce comprises 100+ engineers who provide code, studies, labels, and feedback tests to continually refine the platform. This is the industry’s first AI-based search and social media recruitment platform that quickly and directly publishes job postings to GFJ and other search and social media platforms. The result for employers using Jobiak? More job views, more qualified candidates, higher conversion rates, simplified direct-apply features, and a lower cost and time to hire. Google for Jobs for employers gets a whole lot easier — and more effective.
