Info Sharing Blog

Tuesday, December 10, 2019

Data Representation in Rust

December 10, 2019 Posted by jaacostan
Computers use a fixed number of bits to represent a piece of data, which could be a number, a character, or a symbol. An n-bit storage location can represent up to 2^n distinct values.
A single bit can encode either 1 or 0. Combining two bits encodes 4 distinct possibilities (00, 01, 10, 11). For example, a 3-bit memory location can hold one of eight binary patterns: 000, 001, 010, 011, 100, 101, 110, or 111. Hence, it can represent a maximum of 8 distinct entities, and can be used to represent the numbers 0 to 7. A sequence of 8 bits is known as a byte. A byte can represent 2^8 = 256 distinct values.
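The 3-bit example above can be checked with a short Rust sketch (the helper name bit_patterns is my own, purely for illustration):

```rust
fn bit_patterns(width: usize) -> Vec<String> {
    // An n-bit location holds one of 2^n patterns: 0 .. 2^n - 1.
    (0..1u32 << width)
        .map(|n| format!("{:0width$b}", n, width = width))
        .collect()
}

fn main() {
    // A 3-bit location yields the eight patterns 000 through 111.
    for pattern in bit_patterns(3) {
        println!("{}", pattern);
    }
}
```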

Integers can be represented in 8, 16, 32, or 64 bits. While writing a program, you must choose an appropriate bit length for your integers. An integer can also be either unsigned or signed.

Unsigned Integers: can represent zero and positive integers.
Signed Integers: can represent zero, positive and negative integers. 

An 8-bit unsigned integer has a range of 0 to 255, while an 8-bit signed integer has a range of -128 to 127 - both representing 256 distinct numbers.
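The two ranges come from how the same bits are interpreted. As a quick sketch, the 8-bit pattern 1000_0000 reads as 128 when unsigned, but as -128 when signed (two's complement):

```rust
fn main() {
    let unsigned: u8 = 128; // bit pattern 1000_0000
    // Casting reinterprets the same bits as a signed value.
    let signed = unsigned as i8;
    println!("{} as i8 is {}", unsigned, signed); // 128 as i8 is -128
}
```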
This is just an introduction to data representation. If you have coding experience, you might already know this concept.
fn main() {
    let a: u8 = 128;
    println!("a = {}", a);
}

In Rust, when declaring a variable, we usually specify its data representation as well. In this way, we tell the compiler how much memory the variable will use.
Here in this example, variable a is declared as u8, which means an unsigned 8-bit integer.
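As a quick check, each built-in integer type exposes its range through its MIN and MAX constants; a small sketch:

```rust
fn main() {
    // Range of an unsigned n-bit type: 0 to 2^n - 1.
    println!("u8 : {} to {}", u8::MIN, u8::MAX);   // 0 to 255
    // Range of a signed n-bit type: -2^(n-1) to 2^(n-1) - 1.
    println!("i8 : {} to {}", i8::MIN, i8::MAX);   // -128 to 127
    println!("u16: {} to {}", u16::MIN, u16::MAX); // 0 to 65535
    println!("i16: {} to {}", i16::MIN, i16::MAX); // -32768 to 32767
}
```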

Mutable vs Immutable
Immutable: cannot change the value.
Mutable: can change the value.

fn main() {
    let a: u8 = 128;
    println!("a = {}", a);
    a = 10;
    println!("a = {}", a);
}

This code will return an error because a second value is assigned to a variable that was not declared mutable.

If you want to reassign the variable, you need to declare it with the mut keyword, which explicitly says that the variable is mutable.
fn main() {
    let mut a: u8 = 128;
    println!("a = {}", a);
    a = 10;
    println!("a = {}", a);
}



Rust Cargo Package Manager Explained

December 10, 2019 Posted by jaacostan
Simple Rust programs like Hello World have small source files without much complexity or many dependencies. But when coding larger, more complex programs, there will be multiple dependencies, and to manage them it is wise to use Cargo. Cargo is a package manager that performs tasks such as building the code and downloading and building the libraries the code depends on. Cargo is usually included with the Rust installation, but if you use an IDE, you may need to install a plugin to support Cargo.

Building a Cargo project
Let’s create a new project using Cargo. In this example, I am creating a Cargo package named ex2_cargo.


The cargo new ex2_cargo command will create the Cargo package. Once the command completes successfully, browse into the newly created ex2_cargo directory. You can see a couple of files and a src folder inside the package directory.



The source code will always reside inside the src folder. The autogenerated .gitignore file tells Git which files to ignore. The important configuration file of a Cargo package is Cargo.toml. TOML stands for Tom’s Obvious, Minimal Language, which is Cargo’s configuration format. Open the Cargo.toml file in a text editor.
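A freshly generated Cargo.toml looks roughly like the following (the name and e-mail values below are illustrative placeholders; Cargo fills them in from your environment):

```toml
[package]
name = "ex2_cargo"
version = "0.1.0"
authors = ["Your Name <you@example.com>"]
edition = "2018"

[dependencies]
```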


The first section, [package], indicates that the following statements configure a Cargo package. These sections are editable, and we can add more information if needed.

The following lines set the configuration information Cargo needs to compile your program: the name, version, author of the code, and the Rust edition. Cargo gets information such as the name, author, and email from your working environment.

The last section, [dependencies], is where you define the dependencies used in the project.
Also note that when we created the new Cargo package, it also generated a sample source file, main.rs, inside the src folder; by default it contains the Hello World program.

This is how Cargo organizes a project. All your source code resides inside the src folder, and all other project information is placed at the top level of the Cargo directory.
Now let’s build and run the cargo project.


The cargo build command creates an executable file at target/debug/ex2_cargo.exe. Once the command is successfully executed, you may browse into the debug folder.
You can see that a new folder named target has been created. The executable file is created inside it, under the debug folder.
Run ex2_cargo.exe and you can see the output. Also note that the Cargo.lock file keeps track of the exact versions of the dependencies in the project and is updated automatically.
If you don’t want to create the executable and just want to check that the code compiles, you may use the cargo check command. While writing larger programs, you can run cargo check repeatedly to verify that the code still compiles; this is the fastest way to check the code’s health.
Alternatively, you can use the cargo run command to see the output. This command compiles and runs the code in a single step.

Hello World Rust Program : Code explained

December 10, 2019 Posted by jaacostan
As always, let’s start with the prominent Hello World program as the first exercise.
Create a source file with the rust file extension (.rs)
Enter the following code in the file and save it.

fn main() {
    println!("Hello World!");
}


Compile the source file from your terminal window; in this illustration I am using the Windows command prompt. Then run the successfully compiled executable file.

Analysing the Code
fn main() {
    println!("Hello World!");
}

The first line defines a function in Rust. The main function is always the first code that runs in every Rust program. Here in this Hello World example, main declares that it has no parameters and returns nothing. Inside the main function, we have some output to show. Note that, like Python, Rust style is to indent with four spaces.

println! calls a Rust macro. If the exclamation mark (!) is not used, Rust will treat println as a function. Here we need to print the text on the screen, and hence we call the macro.
If you don’t use the exclamation mark (!), your program will throw a compile-time error.

Next is the "Hello World!" string. We pass the string as an argument to println!, and the string is printed to the screen. Note that the line ends with a semicolon (;), which indicates that the expression is over. Most lines of Rust code end with a semicolon.
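The string is the macro's format argument; each {} placeholder is filled by the following arguments in order. A small sketch:

```rust
fn main() {
    let name = "Rust";
    // The first argument is a format string; each {} is replaced
    // by the remaining arguments, in order.
    println!("Hello {}!", name);           // Hello Rust!
    println!("{} + {} = {}", 1, 2, 1 + 2); // 1 + 2 = 3
}
```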

Once the source code is written, we need to compile it with the Rust compiler by entering the rustc command followed by the file name. After compiling successfully, rustc outputs a binary executable. In a Windows environment, this creates an executable file with a .exe extension; on Linux, the executable doesn’t have any extension.

Tuesday, December 3, 2019

Rust error: linker `link.exe` not found

December 03, 2019 Posted by jaacostan
While compiling a Rust program in a Windows environment, you may encounter the error: linker `link.exe` not found. This happens because the C++ build tools are absent from your machine. To compile Rust programs successfully, one of the prerequisites is the installation of the Build Tools for Visual Studio 2019.


After the download, while installing the Build Tools, make sure that you install the required components (highlighted in yellow).
This will download around 1.2 GB of required files. Once everything is successfully installed, reboot, then re-run your Rust program and it will compile successfully.

Saturday, July 6, 2019

1 Minute Reference : Google Cloud App-Engine

July 06, 2019 Posted by jaacostan

GCP App Engine consists of services, versions, and instances. A service usually provides a single function. Versions are different versions of the code running in the App Engine environment. Instances are managed instances running a specific version of a service.
How to Deploy?
Deploy an App Engine app using the gcloud app deploy command. Deployment also includes configuring the App Engine environment using the app.yaml file. Keep in mind that a project can have only one App Engine app at a time.
How to Scale?
There are three scaling options: auto scaling, basic scaling, and manual scaling. Only auto scaling and basic scaling are dynamic; manual scaling creates resident instances. Auto scaling allows more configuration options than basic scaling.
How to Split the traffic?
This can be done using the gcloud app services set-traffic command. Use the --splits parameter to specify the percentage of traffic to route to each version.
How to Migrate the traffic?
This can be achieved from the Versions page of the App Engine console, or by using the --migrate parameter with the gcloud app services set-traffic command.

Wednesday, June 26, 2019

Kubernetes in Google Cloud : Basics Series

June 26, 2019 Posted by jaacostan
Kubernetes Engine is a container orchestration system for deploying applications to run in clusters.
Kubernetes uses pods as the instances that run a container; a pod can also hold multiple containers.

1) Set the Zone
gcloud config set compute/zone [ZONE_NAME]
2) Create a Kubernetes Cluster
gcloud container clusters create [CLUSTER-NAME]

3) After creating your cluster, you need to get authentication credentials to interact with it.
gcloud container clusters get-credentials [CLUSTER-NAME]
4) Deploy a service/application: use the kubectl run command in Cloud Shell to create a new deployment "hello-server" from the hello-app container image:
kubectl run hello-server --image=gcr.io/google-samples/hello-app:1.0 --port 8080
In Kubernetes, all containers run in pods. The kubectl run command made Kubernetes create a deployment consisting of a single pod containing the hello-app container. A Kubernetes deployment keeps a given number of pods up and running even in the event of failures.
5) Expose the application to the internet.
kubectl expose deployment hello-server --type="LoadBalancer"
6) Verify the running pods
kubectl get pods
7) View the running service.
kubectl get services
8) Scale up the number of pods running the services.
kubectl scale deployment hello-server --replicas 3
Scaling up a deployment is useful when you want to increase the resources available to an application.

For Google Cloud : Basic Cloud Shell commands, follow here.

Tuesday, June 25, 2019

Google Cloud : Basic Cloud Shell commands

June 25, 2019 Posted by jaacostan
Google Cloud resources can be managed in multiple ways. It can be done using Cloud Console, SDK or by using Cloud Shell.
A few basic Google Cloud shell commands are listed below.

1)    List the active account name
gcloud auth list
2)    List the project ID
gcloud config list project
3)    Create a new instance using Gcloud shell
gcloud compute instances create [INSTANCE_NAME] --machine-type n1-standard-2 --zone [ZONE_NAME]

Use gcloud compute machine-types list to view the machine types available in a particular zone. If additional parameters, such as a zone, are not specified, Google Cloud will use the defaults from your project. To view the default project information, use gcloud compute project-info describe.

4)    SSH in to the machine
gcloud compute ssh [INSTANCE_NAME] --zone [YOUR_ZONE]
5)    Retrieve the serial port output of a Windows server
gcloud compute instances get-serial-port-output [INSTANCE_NAME] --zone [ZONE_NAME]
6)    To check whether the server is ready for an RDP connection, inspect the serial port output
gcloud compute instances get-serial-port-output [INSTANCE_NAME]
7)    Create a Storage bucket
gsutil mb gs://[BUCKET_NAME]
8)    Copy a file in to the bucket
gsutil cp [FILE_NAME] gs://[BUCKET_NAME]
9)    Setting up default compute zone
gcloud config set compute/zone [ZONE_NAME]
10)    Set the default region:
gcloud config set compute/region [REGION_NAME]
11)    List the compute engine instances created: 
gcloud compute instances list
12)    Create Kubernetes Cluster
gcloud container clusters create [CLUSTER-NAME]
13)    Get authentication credentials for the cluster
gcloud container clusters get-credentials [CLUSTER-NAME]
14)    Expose the Kubernetes resource to the internet
kubectl expose deployment hello-server --type="LoadBalancer"
Note: passing --type="LoadBalancer" creates a Compute Engine load balancer for your container.
15)    Inspect the service running in Kubernetes
kubectl get service [SERVICE_NAME] 
16)    Stop a Compute Engine instance
gcloud compute instances stop [INSTANCE-NAME]

For the serverless computing offerings on Google Cloud, continue reading here.


Thursday, June 20, 2019

Serverless Computing in Google Cloud.

June 20, 2019 Posted by jaacostan
Google Cloud Platform offers two serverless computing options: App Engine and Cloud Functions.

App Engine is used for applications and containers that run for extended periods of time, such as a website back-end or a custom application for some specific functions/requirements.

Cloud Functions is a platform for running code in response to an event, such as uploading a file or adding a message to a message queue. This serverless option works well when you need to respond to an event by running a short process coded in a function, or by calling a longer-running application that might be running on a virtual machine, managed cluster, or App Engine.

And what is a Managed Cluster?
A cluster consists of at least one cluster master machine and multiple worker machines called nodes. Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes necessary to make them part of the cluster.

Wednesday, March 13, 2019

Cisco Firepower Threat Defense NGFW made it to the Best New Firewall Books 2019

March 13, 2019 Posted by jaacostan


BookAuthority Best New Firewall Books
I'm happy to announce that my book, "Cisco Firepower Threat Defense(FTD) NGFW: An Administrator's Handbook : A 100% practical guide on configuring and managing CiscoFTD using Cisco FMC and FDM.", made it to BookAuthority's Best New Firewall Books:
https://bookauthority.org/books/new-firewall-books?t=bku9fw&s=award&book=1726830187
BookAuthority collects and ranks the best books in the world, and it is a great honor to get this kind of recognition. Thank you for all your support!
The book is available for purchase on Amazon.

An Overview on Cisco Viptela SD-WAN

March 13, 2019 Posted by jaacostan
The Cisco SDWAN (Viptela) solution comprises the following components:
vManage Network Management System (NMS)
Image source https://sdwan-docs.cisco.com
The vManage NMS is a centralized network management system that lets you configure and manage the entire overlay network from a simple graphical dashboard. vManage provides a single pane of glass for configuring and monitoring the SDWAN network. If vManage is offline, traffic forwarding continues on the SDWAN fabric because the control and data planes are separated.
vSmart Controller
The vSmart controller is the centralized brain of the SDWAN solution, controlling the flow of data traffic throughout the network. It works with the vBond orchestrator to authenticate SDWAN devices as they join the network and to orchestrate connectivity among the SDWAN routers. The vSmart controllers are the central orchestrators of the control plane and maintain permanent communication channels with all the SDWAN devices in the network. Over the DTLS connections between the vSmart controllers and vBond orchestrators, and between vSmart controllers themselves, the devices regularly exchange their views of the network to ensure that their route tables remain synchronized. If the vSmart controllers go offline, the SDWAN routers continue forwarding traffic based on the last known configuration state until a configurable graceful period timer expires.
vBond Orchestrator
The vBond orchestrator automatically orchestrates connectivity between SDWAN routers and vSmart controllers. If any SDWAN router or vSmart controller is behind a NAT, the vBond orchestrator also serves as an initial NAT-traversal orchestrator.
The vBond orchestrator automatically coordinates the initial bring-up of vSmart controllers and vEdge routers, and it facilitates connectivity between them. During the bring-up process, the vBond orchestrator authenticates and validates the devices wishing to join the overlay network. This automatic orchestration prevents tedious and error-prone manual bring-up.
vBond is required when:
• A new vEdge router (SDWAN router) joins the network
• A vEdge loses WAN connectivity completely and then regains it
• A vEdge reboots
If vBond is unreachable in any of the three cases above, the vEdge will not be able to join the network.
High availability for vBond is provided via FQDN, where a single FQDN is mapped to several IP addresses. The SDWAN router attempts to reach the IP addresses mapped to the FQDN in the order in which they are specified.
ZTP Server
The ZTP server is the first point of contact for any new SDWAN router being provisioned into the network. It provides the SDWAN router with the FQDN of the vBond orchestrator and also helps provision the enterprise root CA chain into a new SDWAN router that is attempting to join the network.
When setting up the ZTP server, it has to be configured with a list of valid SDWAN router serial numbers, the associated organization names, and the path to the enterprise root CA chain (which is uploaded to the ZTP server).
By default, new SDWAN routers ship with a factory default configuration that looks for the ZTP server at ztp.viptela.com, a cloud-based ZTP offering from Cisco. When the SDWAN router first boots with the factory default configuration, it attempts to obtain an IP address/mask on its WAN interface, along with a DNS server IP, via DHCP. In the absence of DHCP, an alternate auto-IP method is used, where the SDWAN router inspects the physical media between its WAN interface and the upstream provider router and configures itself with an IP address/mask. When auto-IP is used, the default DNS servers are Google's.
High availability for ZTP is provided via FQDN, where a single FQDN is mapped to several IP addresses. The SDWAN router attempts to reach the IP addresses mapped to the FQDN in the order in which they are specified.
SDWAN Routers
The SDWAN routers sit at the perimeter of a site (such as remote offices, branches, campuses, or data centers) and provide connectivity among the sites. They are either hardware devices or software running as a virtual machine. SDWAN routers handle the transmission of data traffic.