Monday, December 5, 2022

Slack Slash Command to block an IP on AWS

Below is an example of a Slack slash command that can be used to block an IP address in AWS; the implementation follows after the setup steps.

This Slack command can be implemented by creating a custom Slack app and integrating it with AWS using the AWS API and the Slack API. The Slack app can be installed and configured in a Slack workspace, and the command can be used by Slack users who have the appropriate permissions and credentials to access the AWS account and manage the security groups.


To implement the Slack command, the following steps can be followed:


  1. Create a custom Slack app and configure it with a bot user and the appropriate permissions and scopes to access the Slack workspace and interact with Slack users.
  2. Create an AWS IAM user and generate an access key and secret access key to access the AWS API using the AWS CLI or the AWS SDK.
  3. Install and configure the AWS SDK (for example boto3) and the Slack SDK (for example slack-bolt) on the server where the Slack app is hosted.
  4. Define the Slack command and implement the command handler function that receives the command arguments and executes the required actions.
  5. Use the AWS SDK and the Slack SDK to call the appropriate AWS API and Slack API methods to block the IP address in the specified security group.
  6. Use the Slack API to send a message to the Slack user who invoked the command, and confirm that the IP address was successfully blocked in AWS.
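As a reference for step 5, the underlying AWS CLI call would look roughly like this, with a placeholder group ID and IP address:

aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 --ip-permissions '[{"IpProtocol": "-1", "IpRanges": [{"CidrIp": "203.0.113.7/32"}]}]'

Note that revoking an ingress rule removes a matching allow rule from the security group, so it only has an effect if such a rule exists.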
Here is an example implementation of the Slack command in Python, using the slack-bolt and boto3 libraries:

import os

import boto3
from slack_bolt import App

# Slack credentials are read from environment variables, not from source code
app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

# boto3 picks up AWS credentials from the environment, ~/.aws, or an IAM role
ec2 = boto3.client("ec2")

# Define the Slack command and the command handler function
@app.command("/block-ip")
def block_ip(ack, command, respond):
    ack()  # acknowledge the command within Slack's 3-second deadline

    # Parse the IP address argument
    ip_address = (command.get("text") or "").strip()
    if not ip_address:
        respond("Please specify the IP address to block")
        return

    # Call the AWS API to block the IP address in the specified security group
    try:
        ec2.revoke_security_group_ingress(
            GroupId=os.environ["SECURITY_GROUP_ID"],
            IpPermissions=[{
                "IpProtocol": "-1",
                "IpRanges": [{"CidrIp": "%s/32" % ip_address}],
            }],
        )
        # Send a message to the Slack user who invoked the command
        respond("The IP address %s was successfully blocked in AWS" % ip_address)
    except Exception as error:
        respond("Failed to block the IP address %s in AWS: %s" % (ip_address, error))

if __name__ == "__main__":
    app.start(port=3000)
In this script, the slash command and its handler function are defined using the @app.command decorator from the slack-bolt library, and the security group change is made with boto3. The code is for demonstration purposes only; do not put secrets in source code, use a secrets vault or environment variables instead.
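With the app running and the /block-ip command registered in the Slack app configuration, a user can type, for example, /block-ip 203.0.113.7 in a channel and get back a confirmation or failure message.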

Monday, November 1, 2021

Akamai Datastream Logs to ELK Stack via API

If you are using the security features of Akamai, the Web Security Analytics (WSA) and SIEM log delivery features may be enough for dealing with daily security events. For CDN features, there are traffic and URL traffic reports covering up to 90 days. You can also pull the logs via the DataStream API, parse them with a Logstash json filter, and output them to an Elasticsearch instance. Finally, you can query and present the parsed logs in a Kibana dashboard.

To start, you should follow the steps below (follow the links in the steps).

  • Enable DataStream with the desired fields from Luna Control Center
  • Create an API-privileged user
  • Test the credentials you created in the step above with an API client like Postman
  • Follow the instructions to become familiar with the DataStream API
  • Now use this Python code to fetch logs from a DataStream and write them to a file that will be parsed by Logstash in the steps below (a minimal sketch of such a script is given after this list)
  • Schedule a cron job so the Python script runs every 5 minutes
  • Follow the instructions to install the ELK stack, but do not configure the Logstash conf file yet
  • Use this Logstash config file to parse the DataStream JSON files
  • Start all ELK services: Logstash, Elasticsearch, Kibana
  • Create a Kibana index pattern
  • Verify that logs are parsed properly and can be seen in the Kibana Discover application
  • Using the Lens app in Kibana, create your visualizations to build up a dashboard
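For reference, here is a minimal sketch of such a pull script. It assumes the DataStream pull API endpoint layout and EdgeGrid credentials for the API user created above; the host, stream ID, time format, and output path are placeholders to adapt to your own account:

import json
import time

import requests
from akamai.edgegrid import EdgeGridAuth  # pip install edgegrid-python

# Hypothetical EdgeGrid credentials for the API user created earlier;
# in practice read them from your .edgerc file or environment variables
BASE_URL = "https://akab-xxxxxxxx.luna.akamaiapis.net"
session = requests.Session()
session.auth = EdgeGridAuth(
    client_token="<CLIENT_TOKEN>",
    client_secret="<CLIENT_SECRET>",
    access_token="<ACCESS_TOKEN>",
)

STREAM_ID = 1234  # placeholder stream ID
end = int(time.time())
start = end - 5 * 60  # last 5 minutes, matching the cron interval

# The exact path and time parameters depend on your DataStream API version;
# this follows the pull-API shape and should be checked against the docs
resp = session.get(
    "%s/datastream-pull-api/v1/streams/%d/raw-logs" % (BASE_URL, STREAM_ID),
    params={"start": start, "end": end, "page": 0, "size": 1000},
)
resp.raise_for_status()

# Write one JSON object per line so the Logstash json filter can parse each event
with open("/var/log/akamai/datastream.json", "a") as out:
    for record in resp.json().get("data", []):
        out.write(json.dumps(record) + "\n")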

Saturday, April 4, 2020

How to automate enumerating base64 encoded parameter to exploit IDOR using python

In this article I want to show you how easy it is to automate the exploitation of an IDOR vulnerability using the Python requests library, where the IDOR parameter is base64 encoded.

If the parameter is not base64 encoded, you can use the Burp Intruder tab to create payloads and test for the IDOR vulnerability.

First we should find the IDOR parameter to enumerate. Assume you have the following URL from which you can download a report:

http://www.example.com/Download.ashx?RprtTkn=MTExMjIyfGZpeHBhcmFtZXRlcnxhbm90aGVyX2ZpeGVkX3BhcmFtZXRlcnw==
Look at the token carefully and try to base64 decode it. Keep in mind: wherever and whenever you see such a string, first try to base64 decode it. You can use the Burp Decoder tab or any online tool to decode it. After decoding the base64 encoded string, we get the following:
111222|fixparameter|another_fixed_parameter|
We are not interested in the fixed parameters, so let's focus on the first part of the string (111222), where we can enumerate and find out whether it is exploitable or not.

To do this:

  1. We start a loop to change that first part of the string, which will be our IDOR enumeration value
  2. Then construct the token and base64 encode it
  3. Then send the request
Before starting to write our Python code, you can copy the authenticated request as cURL from Chrome and convert it to Python requests code using an online service, because you will need to add the authentication cookie and other headers before making a request to the URL.

Combining all of the above explanations, we can write Python code to enumerate the parameter.
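Here is a minimal sketch of such a script. It assumes the decoded token format shown above; the URL, ID range, cookie value, and success check are placeholders you would adapt to the target:

import base64

import requests

URL = "http://www.example.com/Download.ashx"
HEADERS = {
    "User-Agent": "Mozilla/5.0",
    # Hypothetical session cookie copied from the authenticated browser request
    "Cookie": "ASP.NET_SessionId=<YOUR_SESSION_COOKIE>",
}

# Enumerate candidate IDs around the one we observed (111222)
for candidate in range(111000, 112000):
    token = "%d|fixparameter|another_fixed_parameter|" % candidate
    encoded = base64.b64encode(token.encode()).decode()
    resp = requests.get(URL, params={"RprtTkn": encoded}, headers=HEADERS)
    # A 200 response with a non-empty body suggests the report exists and is
    # accessible with another user's ID, i.e. the endpoint is vulnerable to IDOR
    if resp.status_code == 200 and resp.content:
        print("Possible IDOR hit: id=%d, size=%d" % (candidate, len(resp.content)))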

Sunday, December 22, 2019

Solving Palo Alto User-IP Mapping issue while connecting via Pulse Secure VPN

In Palo Alto firewalls you can create username-based rules. But TCP connections do not rely on usernames; they are based on source IP address, destination IP address, source port, destination port, etc. So there must be a mapping that tells the firewall which IP belongs to which username. Palo Alto has various methods to collect and populate the user-IP mapping table.
   
In a Windows environment, firewall admins typically integrate the User-ID agent with Active Directory to listen for logon events. So when a user logs in to his/her PC in a domain, a user-IP mapping is created from the logon event generated on the DC.

After this brief introduction to user-IP mapping, let's come to the issue: what happens if two users get the same IP one after the other?

When users connect to the corporate network via Pulse Secure VPN, they are assigned an IP from the pool of a DHCP server. After this assignment, the Palo Alto User-ID agent creates the user-IP mapping. When that specific user disconnects from the VPN, Pulse Secure sends a DHCP release and the IP address is returned to the available pool. But the user-IP mapping is not cleared on the User-ID agent side. So when another user connects and gets the same IP, all rules for the previous user will also apply to this user, which is a really serious security issue.

To solve this issue, you can configure the User-ID agent as a syslog listener and configure Pulse Secure VPN to forward auth events to it.

First you should define a login-event regex to create the user-IP mapping and a logout-event regex to clear it.
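As a purely hypothetical illustration, assuming Pulse Secure log lines of the form "Login succeeded for user/REALM from 10.1.2.3" and "Logout for user/REALM from 10.1.2.3", the two syslog parse filters on the User-ID agent would look something like this (check your appliance's actual log format before writing the regexes):

Login event filter (creates the mapping):
  Event Regex:    Login succeeded
  Username Regex: user/([a-zA-Z0-9._-]+)
  Address Regex:  from ([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})

Logout event filter (clears the mapping):
  Event Regex:    Logout
  Username Regex: user/([a-zA-Z0-9._-]+)
  Address Regex:  from ([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})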

Then you should add the Pulse Secure VPN IP as a syslog sender and attach the above event filters to the profile.



After these settings, user-IP mappings will be updated as expected, and no stale user-IP mappings will occur.



Thursday, June 21, 2018

Zimbra Visual Log Analysis with ELK Stack

For log analysis, the ELK (Elasticsearch-Logstash-Kibana) stack is a powerful tool for Zimbra Mail Server logs: you can search the logs and easily create visually appealing graphics with the Kibana interface.

In this post we will analyze the logs to find out which IP addresses are abusing logins or brute forcing the Zimbra mail server.

We first assume that if a single IP interacts with at least 5 different accounts, we will count it as malicious usage. You should baseline your own system accordingly, otherwise you will get false positives.

Now let's create the pie chart that will show us these IP addresses visually.



Then choose the following index



Now we will see the whole number of logs. To divide the pie, click Split Slices, then:
  1. Choose Terms for the Aggregation
  2. Choose src_ip for the Field
  3. Enter the number of top IP addresses you want to see in the Size section
Now you should see a pie chart as below.



Now we should add a sub-bucket to see how many accounts these IP addresses interact with.
So click Add sub-buckets, click Split Slices, and configure the sub-bucket as below.



Now you should see the following pie chart, where the inner slices show the source IP addresses and the outer slices show the usernames each IP address interacts with.


Now let's describe what the pie chart tells us.

If you see an inner slice sweeping only one outer slice in a 1-day or 1-hour period, we can safely assume that this is not a malicious IP address.

But if you see an inner slice sweeping more than 5 outer slices, then we can conclude that there is malicious activity: either brute force, or logins to multiple accounts from one IP address.

So to find brute-force attempts, we should add a filter with the string "invalid credentials".
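As a complement to the Kibana view, here is a minimal sketch of the same analysis as an Elasticsearch aggregation run from Python. The index pattern and the src_ip / username field names are assumptions taken from the chart above (depending on your mapping you may need the .keyword sub-fields):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="zimbra-*",  # hypothetical index pattern
    size=0,
    query={"match_phrase": {"message": "invalid credentials"}},  # failed logins only
    aggs={
        "by_ip": {
            "terms": {"field": "src_ip", "size": 10},  # top source IPs
            "aggs": {
                # distinct accounts each IP interacted with
                "accounts": {"cardinality": {"field": "username"}}
            },
        }
    },
)

# Report IPs that touched at least 5 different accounts (our threshold above)
for bucket in resp["aggregations"]["by_ip"]["buckets"]:
    if bucket["accounts"]["value"] >= 5:
        print(bucket["key"], bucket["doc_count"], bucket["accounts"]["value"])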

Thursday, April 19, 2018

Security Auditing with InSpec

InSpec is a tool from Chef. With InSpec you can:

  • Audit Policies
  • Check security requirements
  • Conduct compliance checks

InSpec can be installed on Linux, Mac, or Windows. InSpec rules are written in Ruby files.

I will give you some examples from the GitHub repo of this amazing tool.



describe package('telnetd') do
  it { should_not be_installed }
end

describe inetd_conf do
  its("telnet") { should eq nil }
end


These rules check that the telnetd package is not installed and that the insecure telnet service is not enabled in inetd.conf.

To run InSpec, save the above code snippet to a test.rb file, and at the command prompt run the following command to conduct the test.


inspec exec test.rb
You can also test these requirements against remote systems.

On your Linux servers, using SSH:

inspec exec test.rb -t ssh://user@hostname

Or on Windows, through WinRM:

inspec exec test.rb -t winrm://Administrator@windowshost --password 'your-password'

If you are familiar with Chef Compliance, you can also run compliance checks with the following syntax:


  inspec compliance SUBCOMMAND ...   # Chef Compliance commands



For example, this code uses the sshd_config resource to ensure that only enterprise-compliant ciphers are used for SSH servers.

describe sshd_config do
  its('Ciphers') { should cmp('chacha20-poly1305@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr') }
end

You can see detailed tutorials in the following link 

Friday, April 13, 2018

Moodle Quiz Activity with 500 Concurrent User

We ran a quiz activity with 500 concurrent users, and I want to share this experience with you, because these insights are really valuable for system administrators and I could not find any guidance before this quiz activity. Let me write down the details about the system and the quiz.
We installed Moodle on a VMware virtual machine with the following configuration.

We set the VM to 16 shared vCPUs and 24 GB shared RAM, running CentOS 7.0 Minimal with PHP 5.x, Apache, and MariaDB with a max_connections setting of 700.

The quiz activity had 20 questions, a 30-minute window, and a 17-minute attempt time limit, with 2 questions per page. Question order and answer choices were shuffled, and the setting to auto-submit open attempts was on.

CPU and RAM usage was crucial for us. We saw a maximum of 12 GB RAM usage, which is roughly 1 GB per 50 users.
We saw 95% CPU usage when the quiz started.

Before the quiz started, I stopped cron jobs and automated course backups.

