
Process to delete Sitecore OrderCloud Marketplace

The Admin User who created the Marketplace can delete it and transfer ownership of the account. Users to whom ownership is transferred cannot delete the Marketplace.

To delete a Marketplace, you must first have created one and be logged in to the portal as an Admin User with Full Access.

To create your Marketplace, see this blog.

Once you have logged in to the portal, select the Marketplace you want to delete from the Dashboard.

In this case, let's delete the Marketplace named “OrderCloud Blog Auto Generated ID”.

Once you are in the Marketplace, you should see the option to Delete, since you have Full Access.

A confirmation message is displayed. Click on Send Confirmation Code.

Once you click Send Confirmation Code, you should receive an email at your registered address from noreply@four51.com.

The mail should contain details of the Marketplace you are attempting to delete, along with the code.

Use this code to confirm the deletion of the Marketplace.

And the deleted Marketplace no longer appears in the Dashboard-

Docker Restart Policies

To set a restart policy on a container, use the following command-

docker run --restart=<<policy>> <<image>>

Following are the options for the container restart-

  1. no (default)
  2. on-failure
  3. always
  4. unless-stopped

Following is the matrix for the restart policies-

  • no – the container is never restarted automatically
  • on-failure – the container is restarted only if it exits with a non-zero exit code
  • always – the container is always restarted, and it also starts when the Docker daemon is started*
  • unless-stopped – like always, except a manually stopped container stays stopped*

* – these will start when the Docker daemon is started

The above is applicable only if the container started successfully
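For example, a minimal sketch (the nginx image is used purely for illustration) of running a container that restarts unless explicitly stopped, and of changing the policy on an existing container-

docker run -d --restart=unless-stopped nginx
docker update --restart=always <<container>>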

Live Restore

If you want to keep containers running when the Docker daemon crashes or stops, use the live restore option. This reduces container downtime due to daemon crashes, planned outages, or upgrades.

Update /etc/docker/daemon.json on an Ubuntu system and add the option "live-restore": true.
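A minimal /etc/docker/daemon.json with live restore enabled looks like this-

{
  "live-restore": true
}

Reload the daemon configuration afterwards, e.g. with sudo systemctl reload docker.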

Setup OrderCloud Headstart project using Azurite, Storage Explorer and Cosmos DB Emulator – Part 1

Follow the steps below to configure the Sitecore OrderCloud Headstart middleware without having to provision Azure online resources, i.e. set up offline (local) Azure resources.

Online vs Local Azure resources-

Azure Online Resource – Azure Offline (Local) Resource

  • Azure App Configuration – use appSettings.json
  • Storage Account – Azurite Emulator and Azure Storage Explorer
  • Azure Cosmos Database – Azure Cosmos DB Emulator

Step 1 – Install and Run Azurite Emulator

Install Azurite – the prerequisite is to install Node.js first.

npm install -g azurite

Run Azurite – navigate to the folder where the supporting files should be deployed, e.g. c:\Azurite

azurite

Or, if you want a specific folder and debug mode-

azurite --silent --location c:\azurite --debug c:\azurite\debug.log

The Blob service should listen on http://127.0.0.1:10000 (the Queue and Table services listen on ports 10001 and 10002 respectively).

Step 2 – Install Microsoft Azure Storage Explorer

Install Azure Storage Explorer. Download it from here.

Step 3 – Configure Blob Containers

Connect to Local Azure Storage

Click on Connect to open the connection dialog box.

Select the Local storage emulator

Fill in the required details

  • Display name
  • Account name – this will be used to connect to the blob storage
  • Blobs port – port used to connect to the blob storage
  • Queues and Tables ports – ports used to connect to queues and tables respectively

This is the connection information (you might want to note it down)-

The local account is created-

Storage Explorer is now able to connect to the Local Storage Emulator, i.e. Azurite.
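For reference, Azurite exposes the documented development storage account, so a connection string to it looks like this (well-known default account name and key)-

DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;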

Create a Blob Container named ngx-translate and create a new Virtual Directory named i18n.

The Blob Container and folder can have any other name; you just need to configure them correctly in the UI config. See this in later steps.

Upload the translation file

Ideally, the Container, Virtual Directory, and translation file should be created by the Headstart API. I couldn't make that work, hence as an alternative I have attached the en.json file here.

Once uploaded, you should be able to see the file-

Set Public Access Level to Blob Container

Select Public read access for containers and blobs.

Configure CORS settings

CORS settings are required so that the en.json file can be accessed from the local blob storage.

Click “Add” to add a new CORS setting-

Fill in the following values-

Allowed Headers – x-ms-meta-data,x-ms-meta-target,x-ms-meta-abc

Exposed Headers – x-ms-meta-*

Check that, for all the steps performed above, the operations in Storage Explorer were successful.

Reference for CORS Rule-

https://docs.microsoft.com/en-us/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services

Now try to access the translation file from the local blob storage at http://127.0.0.1:10000/devstoreaccount1/ngx-translate/i18n/en.json
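You can also verify from the command line with a simple curl request (the container has public read access)-

curl -i http://127.0.0.1:10000/devstoreaccount1/ngx-translate/i18n/en.json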

OrderCloud Headstart configuration issues – resolved

Error – Running Headstart.Api with an incompatible .NET framework version

Resolution – Install .NET Core 3.1 from here.

Error – while running HeadStart.Api with the Azure API Configuration when Cosmos is not set up, in ServiceCollectionExtensions.cs

At the time of writing, there is an issue connecting to the Cosmos service when its configuration is not set: the empty strings are not handled gracefully.

Change the highlighted line in the ServiceCollectionExtensions.cs file to the following-

Resolution

==> ServiceCollectionExtensions.cs

if (string.IsNullOrEmpty(config.DatabaseName) ||
    string.IsNullOrEmpty(config.EndpointUri) ||
    string.IsNullOrEmpty(config.PrimaryKey))
{
    // allow server to be started up without these settings
    // in case they're just trying to seed their environment
    // in the future we'll remove this in favor of centralized seeding capability
    return services;
}

Error – while running HeadStart.Api with the Azure API Configuration when Cosmos is not set up, in CosmosExtensions.cs

At the time of writing, there is an issue connecting to the Cosmos service when its configuration is not set: the empty strings are not handled gracefully.

Resolution – create a Cosmos endpoint, or change the highlighted line of code in the CosmosExtensions.cs file to-

==> CosmosExtensions.cs

if (string.IsNullOrEmpty(endpointUrl) ||
    string.IsNullOrEmpty(primaryKey) ||
    string.IsNullOrEmpty(databaseName))
{
    // allow server to be started up without these settings
    // in case they're just trying to seed their environment
    // in the future we'll remove this in favor of centralized seeding capability
    return services;
}

Docker FAQs

Docker Engine and Architecture FAQs

Components of the Docker Engine – Docker Daemon, REST API and Docker CLI

Component that manages Images, Containers, Volumes and Networks – Docker Daemon

Component that manages containers in the Docker Engine – libcontainer

Can a container run without Docker – Yes

Component that keeps a container alive even if the Docker Daemon is not working – containerd-shim

Docker Engine objects – Images, Containers, Volumes and Networks

In a container, data is writable but not persistent – Yes

Docker looks for images in Docker Hub by default – Yes

Read-only component in the Docker Engine – Docker Images

Default directory where Docker data is stored (Ubuntu) – /var/lib/docker

Directory where the Docker config is stored(Ubuntu)- /etc/docker

OCI stands for – Open Container Initiative

OCI specification – runtime-spec and image-spec

View version of Docker engine – docker version

Stop the Docker service – systemctl stop docker and/or systemctl stop docker.socket

Start the Docker service – systemctl start docker.socket and/or systemctl start docker

Check Status of Docker service – systemctl status docker

Debug Docker whilst starting the service – dockerd --debug

Where is the Daemon file located (Ubuntu) – /etc/docker/daemon.json

Where is the daemon socket located (Ubuntu) – /var/run/docker.sock

Port to connect to Docker externally with encrypted traffic – 2376

Port to connect to Docker externally with unencrypted traffic – 2375

Start the docker daemon manually – dockerd

Default docker daemon interface – Unix Socket

Default network driver – bridge

Signals sent to a running container by the STOP command – SIGTERM followed by SIGKILL

Restart policies – no, on-failure, always and unless-stopped

Reduce container downtime due to daemon failure or restart- Enable Live Restore
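A few of the above commands, as they would be typed on an Ubuntu host-

docker version
sudo systemctl stop docker.socket && sudo systemctl stop docker
sudo systemctl start docker
sudo dockerd --debug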

Docker Images FAQs

Default Docker Image Registry – Docker Hub

Various Image Registries –

  • Docker Trusted Registry
  • Google Container Registry
  • Amazon Elastic Container Registry
  • Azure Container Registry

Types of Images in Docker Hub

  • Official Images
  • Verified Images
  • User Images

Base vs Parent Image –

A Base Image is created from scratch, which means it is empty. You cannot create the scratch image yourself; it is always there to be used. Any image built from a Base Image and used as the parent of custom images is a Parent Image. e.g. the ubuntu image is made from the debian image; here debian is the Parent Image.
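As a quick sketch of the difference (two separate Dockerfiles; the hello binary is hypothetical)-

# Dockerfile 1 – ubuntu is the parent image of this custom image
FROM ubuntu
RUN apt-get update

# Dockerfile 2 – built from the empty scratch image, producing a base image
FROM scratch
COPY hello /
CMD ["/hello"]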

Docker Swarm

What is the maximum and recommended number of managers a swarm cluster can have? There is no maximum limit, but the recommended number is 7 managers in a swarm cluster.

Cache Busting and Version Pinning when building Docker images

docker file – Layered Architecture

Docker uses a layered architecture. Each instruction in a Dockerfile creates a new layer in the image, which adds space to the image based on what that instruction does.

When the docker build command is run, it proceeds from the first instruction in the Dockerfile to the last, caching each stage. If the build fails, the next build uses the cache up to the stage that last ran successfully, and the failed stage and the following stages are invalidated and rebuilt. Layers reuse the previous layers, so they don't all have to be built again.

In the example below, the Dockerfile has 6 stages. Each stage will be cached when the build command is run.
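The original post shows the Dockerfile as an image; a comparable 6-stage sketch (package and file names here are illustrative) would be-

# Stage 1 – parent image
FROM ubuntu
# Stage 2 – update the package repository
RUN apt-get update
# Stage 3 – install packages
RUN apt-get install -y python3-pip
# Stage 4 – install dependencies
RUN pip3 install flask
# Stage 5 – copy the custom code
COPY app.py /opt/app.py
# Stage 6 – entrypoint that executes the program
ENTRYPOINT ["python3", "/opt/app.py"]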

Suppose a build fails at Stage 3 for some reason, or a new package has to be added; Docker will invalidate Stage 3 and the following stages.

Next time, once the issue is rectified, the build command will reuse the previous layers and build only the failed stages.

docker file – Layered Architecture

But in this case the package repository will not be updated, so how do we update the repository along with the packages?

Cache Busting

In this case we can combine the instructions so that the repository is updated along with the packages, as below.

docker file – Cache Busting and Version Pinning

Merging Stage 2 and Stage 3 from the previous Dockerfile into a single instruction ensures the repository is first updated and then the packages are installed.

Merging these stages is called Cache Busting.
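Using the illustrative Dockerfile above, Stages 2 and 3 merged into a single instruction-

RUN apt-get update && apt-get install -y python3-pip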

Version Pinning

You can also explicitly specify the version of the package to be installed.

In Stage 2, the Dockerfile instructs Docker to install version 21.3.1 of python3-pip.
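With apt, the pinned instruction would look like this (the exact version string available depends on the distribution's package index)-

RUN apt-get update && apt-get install -y python3-pip=21.3.1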

Best Practice-

Instructions which are most frequently modified should be at the bottom of the Dockerfile, and instructions which are least frequently modified should be at the top.

Docker storage for Ubuntu

Docker uses storage drivers to store the read-only image layers and the writable container layer.

It basically has 6 layers

Read-only/Image Layers

  1. Base Image e.g. Ubuntu OS
  2. Packages/Repositories e.g. apt etc
  3. Dependencies e.g. pip etc
  4. Custom Code e.g. python code etc
  5. Entrypoint or command i.e. executes the program

Writable Layer

6. Container Layer

Layers of a container based on the Ubuntu image

Data and files related to images and containers are stored in the /var/lib/docker folder on Ubuntu.

To check the storage driver used by Docker, use the following command-

docker info | more

In my case it is overlay2.

You can also use this command to get the storage driver

docker info | grep "Storage Driver"

How to change the storage driver

Stop the Docker service

systemctl stop docker.socket
systemctl stop docker

Check the docker service status

service docker status

Back up the docker folder

cp -au /var/lib/docker /var/lib/docker.bk

Change the storage driver

echo '{ "storage-driver": "aufs" }' | sudo tee /etc/docker/daemon.json
service docker start
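Once the service is up, verify the new driver is active-

docker info | grep "Storage Driver"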

Image credit and reference links –

https://docs.docker.com/storage/storagedriver/

https://docs.docker.com/storage/storagedriver/overlayfs-driver/

Error while starting the SOLR

Do you see this error while starting the SOLR?

Check the environment variable JAVA_HOME and verify that the JRE path is correct.

In my case, due to a recent Java SDK update, the environment variable had not been changed to reflect the actual path.

So one of the issues could be an incorrect path. Change this to the actual path.
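A quick way to check on Windows (assuming a standard Java install)-

echo %JAVA_HOME%
"%JAVA_HOME%\bin\java" -version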

And here we have the SOLR service running

Create your first Sitecore OrderCloud Marketplace

Login to the OrderCloud portal.

https://portal.ordercloud.io/login

You will be shown the Dashboard and an option to create a new Sitecore OrderCloud Marketplace.

Step 1 – Select Region

At the time of writing, there are 4 regions available for you to select. Per the documentation, you should create the Marketplace in the Us-West region by default so that it can be seeded using Headstart. If you want to use a region other than Us-West, you might have to request it from the OrderCloud team.

Select region Us-West from the region option.

Step 2 – Autoselected Environment

By default the Sandbox environment is selected for you to try OrderCloud. If you want access to Staging or Production, contact the OrderCloud team.

Step 3 – Provide Marketplace ID [optional]

Provide the Marketplace ID if you want your own name. IDs are writable, i.e. if you choose your own ID, OrderCloud will create the Marketplace with that ID; otherwise you may let OrderCloud auto-generate the ID.

Choose the Marketplace ID.

Step 4 – Provide Marketplace Name

Provide Marketplace name or description here. Click on Create Marketplace.

A new Marketplace will be created and you will be redirected to the Settings tab of the Marketplace with Instance details and other Basic info.

You should also see your newly created Marketplace in the Dashboard.

Now let's try creating a Marketplace with the same name in the same region, i.e. Us-West.

You can see OrderCloud doesn't allow creating a new Marketplace with the same name.

If you try creating a Marketplace with the same name in a different region, it won't allow it either-

Create a new Marketplace without providing the ID

A new Marketplace is created and OrderCloud gives it a unique ID.

Create a file in Linux

Create a file with the touch command

To create a new file, use the touch command followed by the name of the file. This should create an empty file-

touch thirdfile.txt

To create multiple files using the touch command-

touch thirdfile-1.txt thirdfile-2.txt

Create a file with the cat command

To create a new file with the cat command, use the redirection operator followed by the file name. This will allow you to add content to the file; type the content and press Ctrl+D to save it.

cat > fourthfile.txt

Create a file using the echo command

To create a new file using the echo command, use the redirection operator followed by the file name to create an empty file, or add content before the redirection operator to write content while creating the file.

echo "This is fifth file." > fifthfile.txt

Create an empty file with the echo command

echo > sixthfile.txt
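To verify the files created above, a quick listing and content check-

ls -l thirdfile.txt fourthfile.txt fifthfile.txt sixthfile.txt
cat fifthfile.txt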