

DFIR KAPE : Evidence Collection

This post is an introduction to the Kroll Artifact Parser and Extractor (KAPE) and how it can be used to collect evidence.

When malware executes, it usually leaves evidence of its execution behind. These details are important for investigation and forensics. KAPE is an evidence-acquisition tool that can collect this evidence of execution from the victim's system.

The KAPE directory has two main folders: Modules and Targets.

KAPE DFIR tool

KAPE Modules.

The Modules directory contains several subdirectories, and inside them are the module files with the .mkape file extension; that is how modules are identified. Each module defines how to process the collected evidence, and it may use scripts or installed applications to do so. To view all the available modules, browse the KAPE directory or list them from the command line.

KAPE modules

Run kape.exe --mlist . to list all the modules. Each .mkape file also specifies how to format and store the evidence output.


The bin folder contains the executables that the modules use in order to process the collected evidence. When a module is selected to run, it will check the bin folder or the path mentioned in the module. For a better understanding, see Example 4 below.

KAPE Targets.

A target defines what information needs to be collected as evidence. Targets are found in the Targets folder with the .tkape file extension.
To list all the available targets,

Run kape.exe --tlist .

KAPE target list

As an example, take a look at the target "WindowsFirewall". It defines what data has to be collected: the log path for the Windows Firewall logs.
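To give a feel for the format, a .tkape file is a small YAML document along these lines. This is a simplified sketch, not a copy of the shipped target; the field values (name, category, file mask) are illustrative assumptions, though the path shown is the default Windows Firewall log location.

```
Description: Windows Firewall Logs
Author: (author name)
Version: 1.0
RecreateDirectories: true
Targets:
    -
        Name: Windows Firewall Logs
        Category: FirewallLogs
        # Default Windows Firewall log location
        Path: C:\Windows\System32\LogFiles\Firewall\
        FileMask: pfirewall.*
```

When this target runs, KAPE copies any files matching the mask from that path into the destination, preserving the directory structure.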

So, Modules define how to process the collected data, and Targets define what data needs to be collected. It is not mandatory to use modules if you don't want to process the collected evidence.

Now, let's talk about the KAPE command syntax.

Frequently used options are --tsource, --tdest, and --target. However, there are many more granular options, which you can see by running kape.exe from the CLI.

KAPE evidence collection

--tsource :  Target source drive to copy files from (C, D:, or F:\ for example).
--target : Which target configuration to use, i.e., what to collect. In the following example, I have used the target "RegistryHives". It is possible to specify multiple targets separated by commas.
--tdest :  Destination directory to copy files to, i.e., where to save the collected data.
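Putting the three options together, a basic collection command follows this shape (the angle brackets are placeholders, not literal values):

```
kape.exe --tsource <source drive> --target <Target1>,<Target2> --tdest <destination folder>
```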

Example 1:

In this first example, I want to collect the filesystem evidence from a victim machine.

--tsource is the victim machine's C: drive.
--target is the FileSystem target.
--tdest is the thumb drive I have attached to the victim's machine for collecting the evidence.
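Based on the options above, the command would look something like the following. This is a reconstruction, not a transcript of the screenshot; in particular, the drive letter E: and the folder name are illustrative, assuming E: is the attached thumb drive.

```
kape.exe --tsource C: --target FileSystem --tdest E:\KapeCollect\
```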

KAPE example

Once the command is executed, the results are stored on the attached thumb drive. This evidence can be used for further investigation and analysis.

KAPE example

Example 2: 

In this second example, my objective is to collect information related to the filesystem and the event logs.

KAPE example

kape.exe --tsource C: --tdest Z:\Ex1\ --tflush --target FileSystem,EventLogs --vss --vhdx Ex1

--tflush : Delete all files in tdest prior to collection.
--vss : Malware can hide in shadow copies. This option processes all Volume Shadow Copies.
--vhdx : Create a VHDX file. The collected evidence will be stored in a virtual disk image.

KAPE vhdx tflush
After the command executes, we can see the disk image on the attached thumb drive.

Example 4:

In this example, I will use the KAPE GUI to collect and process the evidence. The objective is as follows: collect the event logs and parse the Security log. To achieve this, let's take a look at the module information.

Make sure the executable is present in the bin directory; if it's not, the module will not run. In this example, the FullEventLogView_Security module uses the FullEventLogView.exe executable to process the information from the collected evidence. This means the evidence is first collected as per the target configuration, and then the module runs.
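For reference, a .mkape module file is a small YAML document roughly like the sketch below. This is not the shipped module: the description, version, and the FullEventLogView command-line switch are illustrative assumptions (NirSoft tools commonly support CSV export switches such as /scomma), and %destinationDirectory% stands for KAPE's output-path variable.

```
Description: Parse the Security event log with FullEventLogView
Category: EventLogs
Version: 1.0
ExportFormat: csv
Processors:
    -
        # KAPE looks for this executable in its bin folder
        Executable: FullEventLogView.exe
        # Illustrative switch: export parsed events as CSV to the module destination
        CommandLine: /scomma %destinationDirectory%\Security.csv
        ExportFormat: csv
```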

KAPE module run
The target destination and the module source should be the same, because the module processes the collected evidence. Here I use the target EventLogs and the module FullEventLogView_Security. Notice that the command is automatically generated based on the settings.

Click Execute to run. The command executes, and its status is shown in a terminal window. Notice how the module uses the executable to parse the evidence.

KAPE module run

Once the operation is complete, we can see the final processed data in the specified format.

KAPE module run






