Info Sharing Blog

Wednesday, February 5, 2020

Quick Reference : Google Cloud Resources / Components

February 05, 2020 Posted by jaacostan

Compute:

    1) Compute Engine VM
    Compute Engine is Google’s VM service. Users can choose CPUs, memory, persistent disks, and operating systems.

    2) Kubernetes Engine
    Kubernetes Engine manages groups of virtual servers and the containerized applications that run on them.
    Kubernetes is called an orchestration service because it distributes containers across clusters, monitors cluster health, and scales as prescribed by configurations.

    3) App Engine
    App Engine is Google’s PaaS. Developers can run their code in a language-specific sandbox when using the standard environment or in a container when using the flexible environment.
    App Engine is a server-less service, so customers do not need to specify VM configurations or manage servers.
    The App Engine standard environment runs applications in language-specific sandboxes and is not a general container management system.
    App Engine flexible environments allow you to run containers on the App Engine PaaS.

    4) Cloud Functions
    Cloud Functions is a server-less service that is designed to execute short-running code that responds to events, such as file uploads or messages being published to a message queue.
    Functions may be written in Node.js or Python.

Storage:

    1) Cloud Storage
    Object stores are used to store and access data as objects. Each object is referenced by a unique identifier, such as a URL. Object stores do not provide block or file system services, so they are not suitable for database storage. Cloud Storage is GCP’s object storage service.

    2) Persistent Disk

    3) Cloud Filestore
    File storage provides shared file-system access: files are organized into directories and subdirectories. Google’s Cloud Filestore is based on NFS.

    4) Cloud SQL

    5) Cloud Bigtable
    Google's wide-column NoSQL database offering.

    6) Cloud Spanner
    Cloud Spanner is a global relational database that provides the advantages of relational databases with the scalability previously found only in NoSQL databases.

    7) Cloud Datastore
    A NoSQL document database.

    8) Cloud Memorystore
    An in-memory key-value store used for caching.

    9) Cloud Firestore

Networking:

    1) Virtual Private Cloud
    A VPC is a logical isolation of an organization’s cloud resources within a public cloud. In GCP, VPCs are global; they are not restricted to a single zone or region. All traffic between GCP services can be transmitted over the Google network without the need to send traffic over the public Internet.

    2) Cloud Load Balancing
    Load balancing is the process of distributing a workload across a group of servers. Load balancers can route workload based on network-level or application-level rules. GCP load balancers can distribute workloads globally.

    3) Cloud Armor

    4) Cloud CDN

    5) Cloud Interconnect
        a) Interconnects
        b) Peering

    6) Cloud DNS

    7) Identity Management

    8) Development Tools

Management Tools:

    1) Stackdriver
    2) Monitoring
    3) Logging
    4) Error Reporting
    5) Trace
    6) Debugger
    7) Profiler

Specialized Services:

    1) Apigee API Platform

    2) Data Analytics
        a) BigQuery
        b) Cloud Dataproc
        c) Cloud Dataflow
        d) Cloud Dataprep

    3) AI and Machine Learning   
        a) Cloud AutoML
        b) Cloud Machine Learning Engine
        c) Cloud Natural Language Processing
        d) Cloud Vision

[This article is continuously updated over time]

Monday, January 13, 2020

New book on Security Incident handling

January 13, 2020 Posted by jaacostan ,



Security Incident Handling: A Comprehensive Guide on Incident Handling and Response

Covers,

  • Security Incident Handling Framework
  • Types of threats and their countermeasures
  • Building an effective security incident handling policy and team
  • Preparing a Security Incident Report

This book has four major sections.
The first section gives an introduction to security incident handling and response frameworks. It also gives a glimpse of security forensics and risk management concepts.
The second section explains the different kinds of security threats and attacks that can result in a potential security incident. Being familiar with these attacks is very important for identifying and categorizing a security incident.
The third section covers the security controls and countermeasures used to detect, prevent, and/or mitigate a threat, including detection mechanisms, defense in depth, vulnerability management, and so on.
The strategy and plan for building an efficient security incident handling capability are comprehensively explained in the final section. The six phases of security incident handling and response are explained step by step.

Buy from Amazon

Tuesday, December 10, 2019

Data Representation in Rust.

December 10, 2019 Posted by jaacostan ,
Computers use a fixed number of bits to represent a piece of data, which could be a number, a character, or a symbol. An n-bit storage location can represent up to 2^n different entities.
A single bit can encode either 1 or 0. If we combine two bits, we can encode 4 distinct possibilities (00, 01, 10, 11). For example, a 3-bit memory location can hold one of eight binary patterns: 000, 001, 010, 011, 100, 101, 110, or 111. Hence, it can represent a maximum of 8 distinct entities, such as the numbers 0 to 7. A sequence of 8 bits is known as a byte; a byte can represent 2^8 = 256 distinct entities.
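The powers-of-two arithmetic above can be verified with a few lines of Rust (a quick sketch):

```rust
fn main() {
    // An n-bit storage location can represent 2^n distinct values.
    for n in [1u32, 2, 3, 8] {
        println!("{} bits -> {} distinct values", n, 1u64 << n);
    }
    // A byte (8 bits) therefore spans 0 to 255 when treated as a u8.
    println!("u8 range: {} to {}", u8::MIN, u8::MAX);
}
```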

Integers can be represented in 8, 16, 32, or 64 bits. While coding a program, you must choose an appropriate bit length for your integers. An integer can also be either unsigned or signed.

Unsigned Integers: can represent zero and positive integers.
Signed Integers: can represent zero, positive and negative integers. 

An 8-bit unsigned integer has a range of 0 to 255, while an 8-bit signed integer has a range of -128 to 127; both represent 256 distinct numbers.
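These ranges are exposed as constants on Rust's integer types, so they are easy to check:

```rust
fn main() {
    println!("u8 : {} to {}", u8::MIN, u8::MAX);   // 0 to 255
    println!("i8 : {} to {}", i8::MIN, i8::MAX);   // -128 to 127
    println!("u16: {} to {}", u16::MIN, u16::MAX); // 0 to 65535
}
```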
This is just an introduction to data representation. If you have coding experience, you might already know this concept.
fn main() {
    let a: u8 = 128; // u8: unsigned 8-bit integer, range 0 to 255
    println!("a = {}", a);
}

In Rust, when declaring a variable, we usually annotate the type as well. In this way, we tell the compiler how much memory the variable will use.
Here in this example, the variable a is annotated as u8, which means unsigned 8-bit integer.

Mutable vs Immutable.
Immutable: the value cannot be changed.
Mutable: the value can be changed.

fn main() {
    let a: u8 = 128;
    println!("a = {}", a);
    a = 10; // error: cannot assign twice to immutable variable `a`
    println!("a = {}", a);
}

This code will not compile, because variables in Rust are immutable by default and a is assigned twice.

If you want to assign a new value later, you need to declare the variable with the mut keyword, which explicitly marks the variable as mutable.
fn main() {
    let mut a:u8 = 128;
    println!("a = {}",a);
    a = 10;
    println!("a = {}",a);
}
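As an aside, Rust also allows shadowing: re-declaring a variable with let creates a brand-new binding instead of mutating the old one. A sketch for comparison:

```rust
fn main() {
    let a: u8 = 128;
    println!("a = {}", a); // a = 128
    let a: u8 = 10; // a new binding that shadows the previous `a`
    println!("a = {}", a); // a = 10
}
```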



RUST Cargo Package Manager Explained

December 10, 2019 Posted by jaacostan ,
Simple Rust programs like hello world have small source files without much complexity or many dependencies. But when coding larger and more complex programs, there will be multiple dependencies, and to manage them it is wise to use Cargo. Cargo is a package manager tool that performs tasks such as building the code and downloading and building the libraries the code depends on. Cargo is usually included with the Rust installation, but if you use an IDE, you may need to install a plugin to support Cargo.

Building a Cargo project
Let’s create a new project using Cargo. In this example, I am creating a Cargo package named ex2_cargo.


The cargo new ex2_cargo command creates the Cargo package. Once the command has executed successfully, browse into the newly created ex2_cargo directory. You can see a couple of files and a src folder inside the package directory.

  


The source code will always reside inside the src folder. The autogenerated .gitignore file tells Git which files to ignore. The important configuration file of a Cargo package is the Cargo.toml file. TOML stands for Tom’s Obvious, Minimal Language, which is Cargo’s configuration format. Open the Cargo.toml file in a text editor.


The first section, [package], indicates that the following statements configure a Cargo package. These sections are editable, and if we need to add more information, we can.

The following lines set the configuration information Cargo needs to compile your program: the name, version, and author of the code, and the Rust edition. Cargo picks up details such as the name, author, and email from your working environment.

The last section, [dependencies], is where the dependencies used in the project are declared.
Also note that when we created the new Cargo package, it also created a sample source file, main.rs, containing the default hello world program; it resides inside the src folder.
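For reference, a freshly generated Cargo.toml for a package like ex2_cargo looks roughly like this (the author and edition values below are illustrative):

```toml
[package]
name = "ex2_cargo"
version = "0.1.0"
authors = ["jaacostan"]
edition = "2018"

[dependencies]
```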

This is how Cargo organizes a project. All your source code resides inside the src folder, while everything else related to the project is placed at the top level of the Cargo directory.
Now let’s build and run the Cargo project.


The cargo build command creates an executable file in target/debug/ex2_cargo.exe. Once the command has executed successfully, you can browse into the debug folder.
You can see that a new folder named target has been created. The executable file is created inside it, under the debug folder.
Run ex2_cargo.exe and you can see the output. Also note that the Cargo.lock file keeps track of the exact versions of the dependencies in the project; it is updated automatically.
If you don’t want to create the executable and just want to compile the code, you can use the cargo check command. While writing larger programs, you can run cargo check regularly to verify that the code still compiles; this is the fastest way to check the code’s health.
Alternatively, you can use the cargo run command to see the output. This command compiles and runs the code in a single shot.

Hello World Rust Program : Code explained

December 10, 2019 Posted by jaacostan ,
As always, let’s start with the prominent Hello World program as the first exercise.
Create a source file with the rust file extension (.rs)
Enter the following code in the file and save it.

fn main() {
    println!("Hello World!");
}


Compile the source file from your terminal window; in this illustration I am using the Windows Command Prompt. Then run the successfully compiled executable file.

Analysing the Code
fn main() {
    println!("Hello World!");
}

The first line defines a function in Rust. The main function is always the first code that runs in every Rust program. Here in this Hello World example, the main function declares that it has no parameters and returns nothing. Inside the main function, we have some output to show. Note that, like Python, Rust style is to indent with four spaces.

println! calls a Rust macro. If the exclamation mark (!) is not used, Rust will treat println as a function. Here we need to print the text on the screen, and hence we are calling the macro.
If you don’t use the exclamation mark, your program will fail to compile with an error.
Now the "Hello World!" string: it is passed as an argument to println!, and the string is printed to the screen. Note that the line ends with a semicolon (;), which indicates that the expression is over. Most lines of Rust code end with a semicolon.
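println! can also take format arguments; each {} placeholder in the format string is replaced by the corresponding value. A small variation of the example:

```rust
fn main() {
    let name = "Rust";
    // The {} placeholder is filled in with the value of `name`.
    println!("Hello, {}!", name); // prints: Hello, Rust!
}
```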

Once the source code is written, we need to compile it using the Rust compiler by entering the rustc command followed by the file name. After compiling successfully, Rust outputs a binary executable. On Windows, this creates an executable file with the .exe extension; on Linux, the executable doesn’t have any extension.

Tuesday, December 3, 2019

RUST error: linker `link.exe` not found

December 03, 2019 Posted by jaacostan ,
While compiling a Rust program in a Windows environment, you may encounter the error: linker `link.exe` not found. This is because of the absence of the C++ build tools on your machine. For compiling Rust programs successfully, one of the prerequisites is the installation of the Build Tools for Visual Studio 2019.


After the download, while installing the Build Tools, make sure that you select the required C++ build tools component.
This will download around 1.2 GB of required files. Once everything is successfully installed, reboot and re-run your Rust program, and it will compile successfully.

Saturday, July 6, 2019

1 Minute Reference : Google Cloud App-Engine

July 06, 2019 Posted by jaacostan

GCP App Engine consists of services, versions, and instances. A service usually provides a single function. Versions are different versions of the code running in the App Engine environment. Instances are the managed instances running a specific service.
How to Deploy?
Deploy to App Engine using the gcloud app deploy command. This also includes configuring the App Engine environment using the app.yaml file. Keep in mind that a project can have only one App Engine app at a time.
How to Scale?
There are three scaling options: auto-scaling, basic scaling, and manual scaling. Only auto-scaling and basic scaling are dynamic; manual scaling creates resident instances. Auto-scaling allows for more configuration options than basic scaling.
How to Split the traffic?
This can be done using the gcloud app services set-traffic command. Use the --splits parameter to specify the percentage of traffic to route to each version.
How to Migrate the traffic?
This can be achieved from the Versions page of the App Engine console or by using the --migrate parameter with the gcloud app services set-traffic command.
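For illustration, a minimal app.yaml for a Python app in the standard environment might look like this (the runtime value depends on your language):

```yaml
# Minimal App Engine standard environment configuration (illustrative)
runtime: python37

handlers:
- url: /.*
  script: auto
```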

Wednesday, June 26, 2019

Kubernetes in Google Cloud : Basics Series

June 26, 2019 Posted by jaacostan
Kubernetes Engine is a container orchestration system for deploying applications to run in clusters.
Kubernetes uses pods as the instances that run a container. A pod can also run multiple containers.

1) Set the Zone
gcloud config set compute/zone [ZONE_NAME]
2) Create a Kubernetes Cluster
gcloud container clusters create [CLUSTER-NAME]

3) After creating your cluster, you need to get authentication credentials to interact with it.
gcloud container clusters get-credentials [CLUSTER-NAME]
4) Deploy a service/application: use the kubectl run command in Cloud Shell to create a new deployment "hello-server" from the hello-app container image:
kubectl run hello-server --image=gcr.io/google-samples/hello-app:1.0 --port 8080
In Kubernetes, all containers run in pods. The kubectl run command made Kubernetes create a deployment consisting of a single pod containing the hello-app container. A Kubernetes deployment keeps a given number of pods up and running even in the event of failures.
5) Expose the application to the internet.
kubectl expose deployment hello-server --type="LoadBalancer"
6) Verify the running pods
kubectl get pods
7) View the running service.
kubectl get services
8) Scale up the number of pods running the service.
kubectl scale deployment hello-server --replicas 3
Scaling up a deployment is useful when you want to increase the resources available to an application.

For Google Cloud : Basic Cloud Shell commands, follow here.

Tuesday, June 25, 2019

Google Cloud : Basic Cloud Shell commands

June 25, 2019 Posted by jaacostan
Google Cloud resources can be managed in multiple ways: using the Cloud Console, the SDK, or Cloud Shell.
A few basic Google Cloud Shell commands are listed below.

1)    List the active account name
gcloud auth list
2)    List the project ID
gcloud config list project
3)    Create a new instance using Gcloud shell
gcloud compute instances create [INSTANCE_NAME] --machine-type n1-standard-2 --zone [ZONE_NAME]

Use gcloud compute machine-types list to view a list of machine types available in a particular zone. If additional parameters, such as a zone, are not specified, Google Cloud will use the defaults from your project. To view the default project information, use gcloud compute project-info describe.

4)    SSH in to the machine
gcloud compute ssh [INSTANCE_NAME] --zone [YOUR_ZONE]
5)    Get the serial port output of a Windows server
gcloud compute instances get-serial-port-output [INSTANCE_NAME] --zone [ZONE_NAME]
6)    Check whether the server is ready for an RDP connection by inspecting that serial port output
gcloud compute instances get-serial-port-output [INSTANCE_NAME] --zone [ZONE_NAME]
7)    Create a Storage bucket
gsutil mb gs://[BUCKET_NAME]
8)    Copy a file in to the bucket
gsutil cp [FILE_NAME] gs://[BUCKET_NAME]
9)    Setting up default compute zone
gcloud config set compute/zone [ZONE_NAME]
10)    Set the default region:
gcloud config set compute/region [REGION_NAME]
11)    List the compute engine instances created: 
gcloud compute instances list
12)    Create Kubernetes Cluster
gcloud container clusters create [CLUSTER-NAME]
13)    Get authentication credentials for the cluster
gcloud container clusters get-credentials [CLUSTER-NAME]
14)    Expose the Kubernetes resource to the internet
kubectl expose deployment hello-server --type="LoadBalancer"
(Passing --type="LoadBalancer" creates a Compute Engine load balancer for your container.)
15)    Inspect the service running in Kubernetes
kubectl get service [SERVICE_NAME] 
16)    Stop a Compute Engine instance.
gcloud compute instances stop [INSTANCE-NAME]

About Serverless Computing Offering by Google Cloud, Continue reading here  


Thursday, June 20, 2019

Serverless Computing in Google Cloud.

June 20, 2019 Posted by jaacostan
Google Cloud Platform offers two server-less computing options: App Engine and Cloud Functions.

App Engine is used for applications and containers that run for extended periods of time, such as a website back-end or a custom application for some specific functions/requirements.

Cloud Functions is a platform for running code in response to an event, such as a file upload or a message being published to a message queue. This server-less option works well when you need to respond to an event by running a short process coded in a function or by calling a longer-running application that might be running on a virtual machine, managed cluster, or App Engine.

And what is a Managed Cluster?
A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster.