Welcome! This is my personal blog about Web technologies, software development, open source and other related topics
The ideas and opinions expressed here are solely mine and don't represent those of others, either individuals or companies. The code snippets and references to software products or similar are to be used without any warranty of any kind. If you enjoy the content, feel free to share and re-use it as long as you provide a link to the original post.
The Admin User who created the Marketplace can delete it and transfer ownership of the account. Users to whom the ownership is transferred cannot delete the Marketplace.
To delete the Marketplace you should first have created a Marketplace and be logged in to the portal as an Admin User with Full Access.
To set a restart policy for a container, use the following command-
docker run --restart=<<policy option>> <<container>>
Following are the options for the container restart policy-
no (default)
on-failure
always
unless-stopped
Following is the matrix for the restart policies-
* – this will start when the Docker daemon is started
The above is applicable only if the container starts successfully.
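As a quick illustration, assuming nginx purely as a sample image, a container could be started with a restart policy like this-
docker run -d --restart=unless-stopped nginx
docker run -d --restart=on-failure:3 nginx
The on-failure policy optionally takes a maximum retry count, as in the second command.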
Live Restore
If you want to keep containers running when the Docker daemon crashes or stops, use the live restore option. This reduces container downtime due to daemon crashes, planned outages or upgrades.
Update /etc/docker/daemon.json on an Ubuntu system and add the option "live-restore": true.
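For example, a minimal /etc/docker/daemon.json with live restore enabled could look like the snippet below (keep any other options already present in your file, and reload or restart the Docker daemon after editing)-
{
  "live-restore": true
}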
Follow the below steps to configure the Sitecore OrderCloud Headstart middleware without having to provision Azure online resources, i.e. set up the Azure resources offline (locally).
Step 1 – Install Azurite (pre-requisite: Node.js must be installed)
npm install -g azurite
Run Azurite – Navigate to the folder where the supporting files should be deployed, e.g. c:\Azurite
azurite start
OR – if you want a specific folder and debug mode
azurite --silent --location c:\azurite --debug c:\azurite\debug.log
The Blob service should be listening on http://127.0.0.1:10000
Step 2 – Install Microsoft Azure Storage Explorer
Install Azure storage explorer. Download from here
Step 3 – Configure Blob Containers
Connect to Local Azure Storage
Click on Connect to open the connection dialog box
Select the Local storage emulator
Fill in the required details
Display name
Account name – this will be used to connect to the blob storage
Blobs port – port used to connect to the blob storage
Queues and Tables port – ports used to connect to queues and tables respectively
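If you are running Azurite with its defaults, the values are typically the well-known development-storage settings (these are Azurite's documented defaults, not something specific to this setup)-
Account name – devstoreaccount1
Blobs port – 10000
Queues port – 10001
Tables port – 10002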
This is the connection information (you may want to note it down)-
The local account is created-
Storage Explorer is now able to connect to the Local Storage Emulator, i.e. Azurite
Create a Blob Container named ngx-translate and create a new Virtual Directory named i18n
The Blob Container and folder can have any other name; you just need to configure this correctly in the UI config. See this in later steps.
Upload the translation file
Ideally the Container, Virtual Directory and translation file should be created by the Headstart API. I couldn't make it work, hence as an alternative I have attached the en.json file here.
Once uploaded you should be able to see the file-
Set Public Access Level to Blob Container
Select the Public read access for containers and blobs
Configure CORS settings
CORS settings are required so that the en.json file can be accessed from the local blob storage.
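As a rough sketch, a permissive CORS rule for local development could look like the following (tighten the allowed origins for anything beyond local testing; the field names are the standard Blob service CORS rule properties)-
Allowed Origins: *
Allowed Methods: GET, HEAD, OPTIONS
Allowed Headers: *
Exposed Headers: *
Max Age (seconds): 3600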
Error – while running HeadStart.Api from the Azure API Configuration when Cosmos is not set up, in ServiceCollectionExtensions.cs
At the time of writing there is an issue when connecting to the Cosmos service if its configuration is not set.
Change the code to the following in the ServiceCollectionExtensions.cs file-
Change the highlighted line, as empty strings are not gracefully handled.
Resolution –
==> ServiceCollectionExtensions.cs
if (string.IsNullOrEmpty(config.DatabaseName) ||
string.IsNullOrEmpty(config.EndpointUri) ||
string.IsNullOrEmpty(config.PrimaryKey))
{
// allow server to be started up without these settings
// in case they're just trying to seed their environment
// in the future we'll remove this in favor of centralized seeding
// capability
return services;
}
Error – while running HeadStart.Api from the Azure API Configuration when Cosmos is not set up, in CosmosExtensions.cs
At the time of writing there is an issue when connecting to the Cosmos service if its configuration is not set.
Change the code to the following in the CosmosExtensions.cs file-
Change the highlighted line, as empty strings are not gracefully handled.
Resolution – Create a Cosmos endpoint or change the highlighted line of code to-
==> CosmosExtensions.cs
if (string.IsNullOrEmpty(endpointUrl) ||
string.IsNullOrEmpty(primaryKey) ||
string.IsNullOrEmpty(databaseName))
{
// allow server to be started up without these settings
// in case they're just trying to seed their environment
// in the future we'll remove this in favor of centralized seeding
// capability
return services;
}
Components of the Docker Engine – Docker Daemon, Rest API and Docker Cli
Component that manages Images, Containers, Volumes and Network – Docker Daemon
Component that manages containers in Docker Engine – LibContainer
Containers can run without Docker – Yes
Component that keeps a container alive even if the Docker Daemon is not working – containerd-shim
Docker engine objects- Images, Container, Volume and Network
In a container, data is writable but not persistent (unless volumes are used) – Yes
Docker looks for images in Docker Hub by default – Yes
Read-only component in the Docker engine – Docker Images
Default directory where Docker data is stored (Ubuntu) – /var/lib/docker
Directory where the Docker config is stored (Ubuntu) – /etc/docker
OCI stands for – Open Container Initiative
OCI specification – runtime-spec and image-spec
View version of Docker engine – docker version
Stop the Docker service – systemctl stop docker or/and systemctl stop docker.socket
Start the Docker service – systemctl start docker.socket or/and systemctl start docker
Check Status of Docker service – systemctl status docker
Debug docker whilst starting the service – dockerd --debug
Where is the Daemon file located (Ubuntu) – /etc/docker/daemon.json
Where is the daemon socket located (Ubuntu) – /var/run/docker.sock
Port to connect to docker externally with encrypted traffic – 2376
Port to connect to docker externally with unencrypted traffic – 2375
Start the docker daemon manually – dockerd
Default docker daemon interface – Unix Socket
Default network driver – bridge
Signals sent to a running container on a stop command – SIGTERM followed by SIGKILL
Restart policies – no, on-failure, always and unless-stopped
Reduce container downtime due to daemon failure or restart- Enable Live Restore
Docker Images FAQs
Default Docker Image Registry – Docker Hub
Various Image Registries –
Docker Trusted Registry
Google Container Registry
Amazon Container Registry
Azure Container Registry
Types of Images in Docker Hub
Official Images
Verified Images
User Images
Base vs Parent Image –
A Base Image is created from scratch, which means it starts empty; you cannot create the scratch image yourself, it is only ever used as a starting point. Any image that another image is built from is that image's Parent Image, e.g. Ubuntu is made from the debian image, so here the debian image is the Parent Image.
Docker Swarm
What is the maximum and recommended number of managers a swarm cluster can have? There is no maximum limit, but 7 managers is the recommended maximum for a swarm cluster.
Docker uses a Layered Architecture. Each instruction in a Dockerfile creates a new layer in the image, which adds space to the image based on that layer's instruction.
When a docker build command is run, it proceeds from the first instruction in the Dockerfile to the last, caching each stage. If a build fails, the next build uses the cache up to the last stage that ran successfully, and invalidates the stage that failed along with the following stages. Layers reuse the previous layers, so they don't all have to be built again.
In the below example the Dockerfile has 6 stages. Each stage will be cached when the build command is run.
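The original post shows the Dockerfile as an image; the sketch below is an assumed equivalent with six instructions (base image, packages and file names are illustrative only)-
# Stage 1 – base image
FROM ubuntu
# Stage 2 – refresh the apt package index
RUN apt-get update
# Stage 3 – install packages
RUN apt-get install -y python3 python3-pip
# Stage 4 – install Python dependencies
RUN pip3 install flask
# Stage 5 – copy the application code
COPY app.py /opt/app.py
# Stage 6 – run the application
ENTRYPOINT ["python3", "/opt/app.py"]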
Suppose a build fails at Stage 3 for some reason, or a new package has to be added; Docker will invalidate Stage 3 and the following stages.
Next time, once the issue is rectified, the build command will reuse the previous layers and build only the failed stages.
But in this case the package repository index will not be updated, so how do we resolve this and update the repository along with the packages?
Cache Busting
In this case we can combine the instructions so the repository index is updated along with the package installation, as shown below.
Merging Stage 2 and Stage 3 from the previous Dockerfile into a single instruction ensures the repository index is first updated and then the packages are installed.
Merging these stages is called Cache Busting.
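Using the assumed Dockerfile above, the merged instruction would look roughly like this-
# Stages 2 and 3 merged – the index is always refreshed before installing
RUN apt-get update && apt-get install -y python3 python3-pip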
Version Pinning
You can also explicitly specify the version of the package to be installed.
In Stage 2 the Dockerfile instructs Docker to install version 21.3.1 of python3-pip.
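Continuing the same assumed example, pinning python3-pip to 21.3.1 with apt would look like this (whether that exact version is available depends on the repository)-
RUN apt-get update && apt-get install -y python3-pip=21.3.1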
Best Practice-
Instructions that are modified most frequently should be at the bottom of the Dockerfile, and instructions that are modified least often should be at the top.
You will be shown the Dashboard and an option to create a new Sitecore OrderCloud Marketplace.
Step 1 – Select Region
While writing this blog there are 4 Regions available for you to select. As per the documentation you should create the Marketplace in the Us-West region by default so that it can be seeded using Headstart. If you want to use regions other than Us-West you might have to request this from the OrderCloud team.
Select the Us-West region from the region options.
Step 2 – Autoselected Environment
By default the Sandbox environment is selected for you to try out OrderCloud. If you want access to Staging or Production, contact the OrderCloud team.
Step 3 – Provide Marketplace ID [optional]
Provide the Marketplace ID if you want to have your own name for it. IDs are writable, i.e. if you choose your own ID OrderCloud will use it, or you may let OrderCloud auto-generate the ID.
Choose the Marketplace ID
Step 4 – Provide Marketplace Name
Provide Marketplace name or description here. Click on Create Marketplace.
A new Marketplace will be created and you will be redirected to the Settings tab of the Marketplace with Instance details and other Basic info.
You should also see your newly created Marketplace in the Dashboard.
Now let's try creating a Marketplace with the same name in the same region, i.e. Us-West.
You can see OrderCloud doesn't allow you to create a new Marketplace with the same name.
If you try creating a Marketplace with the same name in a different region, it won't allow it either-
Create a new Marketplace without providing the ID
A new Marketplace is created and OrderCloud gives it a unique ID.
To create a new file, use the touch command followed by the name of the file-
This should create an empty file.
touch thirdfile.txt
To create multiple files using the touch command-
touch thirdfile-1.txt thirdfile-2.txt
Create a file with the cat command
To create a new file with the cat command, use the redirection operator followed by the file name.
This will allow you to add content to the file.
cat > fourthfile.txt
Create a file using the echo command
To create a new file using the echo command, use the redirection operator followed by the file name; this will create an empty file. Add content before the redirection operator to add content while creating the file.
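For example, using an illustrative file name-
echo > fifthfile.txt
echo "adding some content while creating the file" > fifthfile.txt
The first command creates an (effectively empty) file, the second writes the given text into the file.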