Monday 6 July 2020

Robots.txt file in AEM websites


When we think about AEM websites, SEO is one of the major considerations. To ensure that crawlers can crawl our website, we need a sitemap.xml and a robots.txt that directs the crawler to the corresponding sitemap.xml.

A robots.txt file lives at the root folder of the website. It acts as an entry point to the site for crawlers and ensures they access only the relevant items we have defined.



robots.txt in AEM websites

Let us see how we can implement a robots.txt file in our AEM website. There are many ways to do this, but below is one of the easiest ways to achieve it.

Say we have multiple websites (multilingual) with language roots /en, /fr, /gb, /in.

Let us see how we can enable robots.txt in our case.

Add robots.txt in Author

Log in to CRXDE and create a file called 'robots.txt' under the path /content/dam/[sitename].
Ensure the following lines are added to the 'robots.txt' on the AEM author instance:

#Any search crawler can crawl our site
User-agent: *

#Allow only below mentioned paths
Allow: /en/
Allow: /fr/
Allow: /gb/
Allow: /in/
#Disallow everything else
Disallow: /

#Crawl all sitemaps mentioned below
Sitemap: https://[sitename]/en/sitemap.xml
Sitemap: https://[sitename]/fr/sitemap.xml
Sitemap: https://[sitename]/gb/sitemap.xml
Sitemap: https://[sitename]/in/sitemap.xml

Now publish the robots.txt
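
As a quick sanity check (the host and port below are assumptions for a default local publish instance), you can fetch the file directly from its DAM path on publish:

curl http://localhost:4503/content/dam/[sitename]/robots.txt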

Add OSGi configuration for URL mapping

Now add the below entry in the OSGi console (ConfigMgr), under 'Apache Sling Resource Resolver Factory'.

Add the below mapping in the 'URL Mappings' section:
/content/dam/sitename/robots.txt>/robots.txt$

Allow access to robots.txt via the dispatcher

Now allow the crawlers to access robots.txt through the dispatcher by adding an allow rule for robots.txt in the dispatcher filter section:
/0010 { /type "allow"  /url "/robots.txt" }
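
For context, here is a minimal sketch of how that rule could sit inside the /filter section of dispatcher.any. The rule numbers and the deny-all entry are illustrative; keep the existing rules of your farm as they are.

/filter
  {
  # Deny everything by default
  /0001 { /type "deny"  /url "*" }
  # ... existing allow rules for your site ...
  # Allow crawlers to fetch robots.txt
  /0010 { /type "allow"  /url "/robots.txt" }
  }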

When you hit https://www.[sitename]/robots.txt, you should see the robots.txt file on the public domain.

Now any search engine that tries to access our site will find the robots.txt and can recognise whether it has permission to crawl the site and which areas of the site it is allowed to crawl.

Some sample robots.txt rules are given below.


# Disallow googlebot accessing example.com/directory1/... and example.com/directory2/...
# but allow access to subdirectories -> directory2/subdirectory1/...
# All other directories on the site are allowed by default.
User-agent: googlebot
Disallow: /directory1/
Disallow: /directory2/
Allow: /directory2/subdirectory1/

# Block the entire site from xyzcrawler.
User-agent: xyzcrawler
Disallow: /


Let me know via the comments section if you find a better way to do this.

Sunday 5 July 2020

Path changes while upgrading from AEM 6.3 to AEM 6.5

Before we start any AEM upgrade, we should ensure that a detailed study of the release notes is done.
If the upgrade is planned to the next direct version (say AEM 6.4 to AEM 6.5), we can just read the release notes of AEM 6.5 and proceed with the upgrade. But if the case is different (AEM 6.3 to AEM 6.5), ensure we compare the release notes of each version.

For example, say we are upgrading from AEM 6.3 to AEM 6.5. We know there was an AEM 6.4 in between. So for the upgrade, first study the release notes of AEM 6.4 and note the changes between AEM 6.3 and AEM 6.4, then do the same comparison from AEM 6.4 to AEM 6.5. This process ensures we identify every change and accommodate all of them, taking precautions not to break anything during the upgrade.

Approach
AEM 6.3 -> AEM 6.4(Release Notes) -> AEM 6.5 (Release Notes)

AEM 6.3 to AEM 6.5 Path Changes

Notes: AEM content is being restructured out of /etc into other folders in the repository, along with guidelines on what content goes where, adhering to the following high-level rules:
•    AEM product code will always be placed in /libs, which must not be overwritten by custom code
•    Custom code should be placed in /apps, /content, and /conf


Old Path -> New Path(s)

/etc/workflow/models -> /libs/settings/workflow/models, /conf/global/settings/workflow/models, /var/workflow/models
/etc/workflow/instances -> /var/workflow/instances
/etc/workflow/launcher/config -> /libs/settings/workflow/launcher/config, /conf/global/settings/workflow/launcher/config
/etc/workflow/scripts -> /libs/workflow/scripts, /apps/workflow/scripts
/etc/designs/default -> /libs/settings/wcm/designs/default, /apps/settings/wcm/designs/default
/etc/taskmanagement -> /var/taskmanagement
/etc/tags -> /content/cq:tags
/etc/notification/email/default/com.day.cq.replication -> /libs/settings/notification-templates/com.day.cq.replication, /apps/settings/notification-templates/com.day.cq.replication
/etc/workflow/notification -> /libs/settings/workflow/notification, /conf/global/settings/workflow/notification
/etc/workflow/packages -> /var/workflow/packages

Hide /content root path in publish and website domains

How do we hide the '/content/sitename' from the URL?


It is always good to hide the content path from the public domain or publish server URLs. Let us see one of the best approaches for doing this.

Say we have our website at the below domain URL,

https://[websitename.com]/content/sitename/fr/home.html

And hosted AEM with below URL,

http://<hostname>:<port>/content/sitename/fr/home.html

As a best practice and recommended option, we have to ensure that '/content/sitename' does not appear on the public domain.

Here I am going to explain one of the best approaches to achieve this. We will have to configure things on both the publish instance and the dispatcher.

Configurations on PUBLISH:

Configure the Apache Sling Resource Resolver Factory to ensure the URLs are rewritten at the PUBLISH server.

Apache Sling Resource Resolver Factory - add the below configurations under the 'URL Mappings' section:

/content/sitename/(.*)</$1
/content/sitename/fr(.*)>/fr$1
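
If you prefer to keep this configuration in code rather than setting it by hand in the ConfigMgr, below is a minimal sketch of the same mappings as a run-mode OSGi config. The config path and the PID are assumptions (the PID shown is the usual one for the Resource Resolver Factory on AEM 6.x); verify both against your instance, and keep any default mapping entries that are already present.

# /apps/sitename/config.publish/org.apache.sling.jcr.resource.internal.JcrResourceResolverFactoryImpl.config
resource.resolver.mapping=["/content/sitename/(.*)</$1","/content/sitename/fr(.*)>/fr$1"]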

Once you save the configuration and hit the webpage at http://<hostname>:<port>/fr/home.html, you will be able to see the home page on the publish instance (without /content/sitename).

Note: After saving, it takes some time for the relevant bundles to auto-restart.

Configurations on DISPATCHER:
Now we are able to hit the publish server without the content path. Let us see how the same can be achieved on the dispatcher, i.e. the public domain URL.

Step 1: Ensure the 'mod_rewrite' module is loaded in Apache.
#
# This file loads most of the modules included with the Apache HTTP Server
#
LoadModule rewrite_module modules/mod_rewrite.so

Step 2: Set the 'DispatcherUseProcessedURL' property to 1 - this ensures the dispatcher uses the processed (rewritten) URLs.
<IfModule disp_apache.c>
        # This is enabled to ensure re-writes taking effect
        DispatcherUseProcessedURL    1     
</IfModule>

Step 3: Now update the virtual host file and add the below rules to it. (Usually this configuration file sits in the conf.d folder.)

<IfModule mod_rewrite.c>

RewriteEngine on

# Strip trailing slashes
RewriteRule ^(.+)/$ $1

# Shorten the URL by removing the content path
RewriteRule ^/content/sitename/(.*)\.html$ /$1.html [R,L]
# Redirect the root URL to the home page
RewriteRule ^/?$ /fr/home.html [R,L]

</IfModule>

Now restart Apache and hit the public domain URL
https://[websitename.com]/fr/home.html; you will see the home page loading.

Monday 29 June 2020

Experience with Adobe-AEM Certification


Recently I appeared for an AEM certification exam and thought I would share my experience with you all.

How did I register for the exam?

I went through the AEM certification site and registered myself, choosing a convenient date when I could make myself completely free.

- There were two options, PSI and Examity. I chose one of them.

- Due to the COVID lockdown, the majority of exams are happening online.

Procedures on exam day.

The exam notification email said I could log in to the exam system half an hour before. Since this was my first experience, I logged into the system an hour before.

System checks: The exam site asked me to install a secure browser; once it was installed, the system requirements were checked. There is a set of checks covering system resources, camera, internet speed, the browser used, etc.

Check-in personal information: Once that was done, I was asked to take a picture of 1) myself, 2) my government-issued ID, and 3) a video scan of my room by rotating the laptop around, including the desk where the laptop was placed.

Once those files were checked into the system, I was asked to wait for the scheduled time.

My recommendation here:
If you are confident about your system, internet, etc., log in just half an hour before the exam - else you will have to wait a lot.

You can even disconnect and log back in, but since the camera was on, I did not attempt that.

Waiting for proctor
Now my scheduled time came. I had scheduled the exam for 10 AM, but I was still getting a message like 'Your exam will start once the proctor joins at the scheduled time'. This went on for 10 minutes. I saw an option to chat with an executive and pinged them via the chat option. Even connecting to the customer care executive took some time.

The executive told me he could reschedule, but the replies were very slow, so I was worried about getting a confirmation.

Check-in Expert Verifying my details:
Fortunately, my on-screen message changed to 'Check-in verification expert is analysing details', so I asked the chat agent to hold off on rescheduling.

The verification agent (proctor) told me to re-take the photo of my ID proof, which was not clear according to him. I did that and re-uploaded it.

Starting the exam now:
After waiting a few minutes, my screen changed to 'starting with exam'. Then the proctor started sending me messages.

The proctor asked me to scan the room again, rotating to all four sides of the room (he saw my ID card on my desk and pinged me to remove it). Once that was done, the proctor shared the terms and conditions and then started the exam.

Notes:
Even though the exam was scheduled for a specific time, it started quite late after all these procedures. The same process is carried out for everyone taking the exam in parallel, which is why the proctor may not be able to start the exam at the scheduled time. So I would ask everyone who takes the test to be patient and wait until the procedures are completed.

My suggestions
- Ensure uninterrupted internet and power connectivity.
- Ensure it's a peaceful space where no one disturbs you.

I will be providing more tips for the AEM certification via my YouTube channel - the link is provided on the right side of the webpage.

Saturday 13 June 2020

Configure https (SSL) on an AEM instance quickly


There are cases where, during development, we may need to set up an https connection on our existing AEM instance.

By following this procedure we can have both http and https on the same AEM instance. This is very helpful while testing AEM features that require SSL connections.

To start with, we need keys and certificates to configure SSL on AEM. We will use OpenSSL to set them up. The method was tested on Windows, but should work on any other OS in a similar way.

How to set up OpenSSL on Windows

  • Download OpenSSL from a trusted source - ensure the build is relevant to your OS (including 32-bit vs 64-bit)
  • Unzip it.
  • Add the OpenSSL bin folder to the system PATH
  • Place the OpenSSL conf file in the required path (else you may get an error that the OpenSSL conf cannot be found)

Now OpenSSL is configured on your Windows machine.
  • Using the command prompt, execute the below commands

### Create Private Key
$ openssl genrsa -aes256 -out localhostprivate.key 4096

### Generate Certificate Signing Request using private key
$ openssl req -sha256 -new -key localhostprivate.key -out localhost.csr -subj "/CN=localhost"

### Generate the SSL certificate and sign with the private key, will expire one year from now
$ openssl x509 -req -days 365 -in localhost.csr -signkey localhostprivate.key -out localhost.crt

### Convert Private Key to DER format - SSL wizard requires key to be in DER format
$ openssl pkcs8 -topk8 -inform PEM -outform DER -in localhostprivate.key -out localhostprivate.der -nocrypt
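
Optionally, you can inspect the generated certificate before importing it (a simple verification step using the same OpenSSL setup):

### Inspect the generated certificate (issuer, subject, validity dates)
$ openssl x509 -in localhost.crt -noout -text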

You will now have the certificates on your local drive.




Use the SSL Wizard in AEM

Now log in to AEM
http://localhost:4502/aem/start.html

Tools > Security > SSL Configuration

For the store credentials, provide the key store and trust store passwords. [I used 'admin' for all of them, since it's a localhost setup.]
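
The wizard will also ask for the private key (the DER file generated above), the certificate, and an HTTPS host name and port. Once it completes, a quick command-line check confirms that https is working - a minimal example, assuming the commonly used port 8443 was entered in the wizard (-k is needed because the certificate is self-signed):

$ curl -k -I https://localhost:8443/libs/granite/core/content/login.html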

Monday 6 April 2020

Common security vulnerabilities identified as part of AEM projects

Whenever an AEM project goes live, a set of scans happens to ensure that the website adheres to a set of security and performance guidelines.
The security/penetration tests are usually scheduled a few days ahead of any AEM go-live. Below are the issues commonly identified on AEM websites.



Horizontal Privilege Escalation Vulnerability   

Through horizontal privilege escalation, hackers remain on the same general user privilege level but gain access to data of other accounts or processes that should be unavailable to the current account or process.

Host Header Injection Vulnerability   

Normally this header is used by a web server to decide which website should process the received HTTP request. Whenever many websites are hosted on the same IP address, the web server uses the value of this header to forward the HTTP request to the correct website for processing. If the header value is not validated, this becomes a vulnerability.

Email Flooding Attack   

In general, this means sending large volumes of email to an address so that the mailbox overflows, or overwhelming the server where the email address is hosted, in a denial-of-service attack. It is often used to distract attention from important email messages that indicate a security breach.

HTML Injection Vulnerability   


HTML injection occurs when user input is not correctly sanitized or the output is not encoded, and an attacker is able to inject valid HTML code into a vulnerable web page. If rendering methods are given untrusted input, there is a high risk of XSS, specifically HTML injection. If strings are not correctly sanitized, the problem can lead to XSS-based HTML injection.

Session Replay Attack   

These attacks, also known as playback or replay attacks, are network attacks that maliciously repeat or delay a valid data transmission. A hacker can do this by intercepting a session and stealing a user's unique session ID. The hacker can then impersonate an authorized user on the site and will be granted full access to do anything that the authorized user can do on the website.

Stored XSS via File Upload Vulnerability

The web application allows file uploads of any type, and a file containing HTML content (or various other extensions) could be uploaded. When HTML files are allowed, an XSS (cross-site scripting) payload can be injected into the uploaded file.

Web Server Banner Disclosure   

When we run a web server, it often reveals what type of server it is, its version number, and the operating system. This information is available in header fields and can be acquired using a web browser to make a simple HTTP request to any web application. It is often called the web server banner.

Concurrent Logins Allowed    

This refers to parallel logins. With interactive logins at desktops and laptops, a system administrator cannot prevent a given user from logging on at one computer, letting somebody else work as them or leaving the computer unattended, and then walking up to another computer and logging on there as well. This can lead to data leaks.

Email Harvesting   

This is the process of obtaining a large number of e-mail addresses through various online sources, such as website hacking. Attackers obtain lists of valid email addresses, either by purchase or theft, for the purpose of sending bulk emails or spam.

Vulnerable jQuery version in use

Old versions of jQuery contain known vulnerabilities and pose a threat to websites.

Content Spoofing Vulnerability  

Content spoofing, or content injection, is one of the common web security vulnerabilities. It allows an end user of the vulnerable web application to spoof or modify the actual content on the web page. The user might exploit security loopholes in the website to inject whatever content they wish into the target website. When an application does not properly handle user-supplied data, an attacker can supply content to the web application, typically via a parameter value, that is reflected back to the user.

Missing Secure and "Http Only" Flag from cookie   

These are additional flags included in a Set-Cookie HTTP response header. If supported by the browser, using the HttpOnly flag when generating a cookie helps mitigate the risk of client-side script accessing the protected cookie. If a browser that supports HttpOnly detects a cookie containing the HttpOnly flag and client-side script code attempts to read the cookie, the browser returns an empty string as the result. This causes the attack to fail by preventing the malicious (usually XSS) code from sending the data to an attacker's website.

[Set-Cookie: <name>=<value>[; <Max-Age>=<age>] [; expires=<date>][; domain=<domain_name>] [; path=<some_path>][; secure][; HttpOnly]]

Cookie Path Set to Root  

When the cookie path is set to the root ('/'), the cookie is sent to every application hosted on the domain. Set the cookie path attribute to the application-defined folder instead.

SameSite Cookie Attribute Not Set   

The 'SameSite' attribute tells browsers when and how to send cookies in first-party or third-party situations. Browsers use this attribute to decide whether a cookie should be sent along with cross-site requests.
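
As an illustration (the cookie name and values are hypothetical), a Set-Cookie header that addresses this and the previous two findings would look like:

Set-Cookie: sessionid=abc123; Path=/app; Secure; HttpOnly; SameSite=Lax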

Improper Error/Exception Handling Vulnerability   

Improper error handling arises when security mechanisms fail to deny access unless it is specifically granted. This may occur as a result of a mismatch between policy and coding practice, or from code that lacks appropriate error-handling logic. For example, a system may grant access until it is explicitly denied, instead of denying everything by default and allowing access individually.

Improper Session Management

The issue arises because session tokens are not handled properly. While some of this might be intentional, enough care should be taken to add some kind of validation for the user. Because of the way mobile applications are used, many developers allow long or non-expiring user sessions, or use session tokens that are too predictable.

Session Timeout is not set Properly   

As a standard practice, an application should invalidate a session after a predefined idle time has passed (a timeout) and provide users the means to invalidate their own sessions (logout). These simple measures help keep the lifespan of a session ID as short as possible. To protect against insufficient session expiration attacks, the logout function should be easily visible to the user, explicitly invalidate the user's session, and disallow reuse of the session token.

Missing Security Headers   

Security HTTP headers are a fundamental part of website security. Once implemented, they protect against the types of attacks a site is most likely to come across, such as XSS, code injection, and clickjacking.
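
As an example, below is a minimal sketch of how some of these headers could be added at the Apache/dispatcher layer. It assumes mod_headers is enabled; the values are starting points and should be tuned per site.

<IfModule mod_headers.c>
    # Prevent the site from being framed by other origins (clickjacking)
    Header always set X-Frame-Options "SAMEORIGIN"
    # Stop browsers from MIME-sniffing responses away from the declared content type
    Header always set X-Content-Type-Options "nosniff"
    # Restrict where content may be loaded from (tighten per site)
    Header always set Content-Security-Policy "default-src 'self'"
    # Force HTTPS for a year, including subdomains
    Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"
</IfModule>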

There are third-party services that can run these security scans. A thorough scan, followed by identifying and fixing all major and critical findings, should be a mandatory part of any AEM project delivery.

Tuesday 5 November 2019

Composum with AEM

What is Composum?

Composum is an open-source project based on Apache Sling; it provides a set of useful tools and a framework that make it easy to work with Apache Sling.

Modules of Composum: Nodes, Pages, Assets, Platform

Composum Platform:
This provides the central services to set up an Apache Sling based application platform.

Composum Pages:
Pages provides the content management feature of the Composum platform

Composum Assets:
This is the image asset management feature of the Composum

Composum Nodes:
Resource/JCR development tool with core API for all Composum modules.

Use case with AEM: Composum helps create a diagram representation of the OSGi service dependencies.

Demo video can be found here - Composum with AEM