Archive for the ‘Risk Management’ Category
 

Shortcomings of following IIS security best practices

Friday, November 16th, 2012

Having a secure web application is obviously in the best interest of the business. However, in many cases development is done without security in mind. Understandably, time-to-market is an important factor for a business, but a layered security approach will be more beneficial in the long run.

As a preliminary step it is important to secure the perimeter by implementing a firewalled DMZ zone.

In short, one must follow the configuration below:

Internet—[firewall]—[DMZ Zone]—[firewall]—Internal Network

The benefit of this configuration is that the web server only has limited access to the internal network.

The external firewall should only allow incoming connections on ports 80 (HTTP) and/or 443 (HTTPS), and this should be done only once the web application is ready for deployment. As a first step, the external firewall should not allow any connections at all.

The internal firewall should allow connections to any service that is needed and reject all other connections. Additionally, it should only allow connections initiated from the internal network and reject connections initiated from the DMZ towards the internal network.
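As an illustration, a similar policy can be enforced on the web server host itself with the Windows Firewall cmdlets available in Server 2012 (a minimal sketch of host-level defence in depth, not a substitute for the dedicated firewalls above; the rule names are arbitrary):

  • PS:\> New-NetFirewallRule -DisplayName "Allow HTTP" -Direction Inbound -Protocol TCP -LocalPort 80 -Action Allow
  • PS:\> New-NetFirewallRule -DisplayName "Allow HTTPS" -Direction Inbound -Protocol TCP -LocalPort 443 -Action Allow
  • PS:\> Set-NetFirewallProfile -Profile Domain,Private,Public -DefaultInboundAction Block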

Another obvious benefit of such a configuration is that if the web server gets compromised the internal network will be protected, and the “attacker” will not be able to use the web server to compromise hosts on the internal network.

Moreover, having a firewall in place from the start will make it easier to configure access to the web server later on.

As a general rule every exposed service should be seen as a potential threat, as individual vulnerabilities in services can lead to full compromise of the host.

Having said that, the setup of a DMZ is not what this blog post is about, but it needed to be stated here.

Installing the server:

In the following post we will try to emulate the scenario of a vulnerable web application and show how the web server needs to be configured in order to be protected against such applications. For this test case an installation of the latest Microsoft Windows Server 2012 Core was done, the reason being that no extra services or additional software will be installed.

Soon after the Core installation is finished, we see the Windows Server 2012 login screen.

After successful authentication, we are greeted with an administration terminal, and we install IIS by issuing the command below:

C:\>CMD /C START /w PKGMGR.EXE /l:log.etw /iu:IIS-WebServerRole;IIS-WebServer;IIS-CommonHttpFeatures;IIS-StaticContent;IIS-DefaultDocument;IIS-DirectoryBrowsing;IIS-HttpErrors;IIS-HttpRedirect;IIS-ApplicationDevelopment;IIS-ASP;IIS-CGI;IIS-ISAPIExtensions;IIS-ISAPIFilter;IIS-ServerSideIncludes;IIS-HealthAndDiagnostics;IIS-HttpLogging;IIS-LoggingLibraries;IIS-RequestMonitor;IIS-HttpTracing;IIS-CustomLogging;IIS-ODBCLogging;IIS-Security;IIS-BasicAuthentication;IIS-WindowsAuthentication;IIS-DigestAuthentication;IIS-ClientCertificateMappingAuthentication;IIS-IISCertificateMappingAuthentication;IIS-URLAuthorization;IIS-RequestFiltering;IIS-IPSecurity;IIS-Performance;IIS-HttpCompressionStatic;IIS-HttpCompressionDynamic;IIS-WebServerManagementTools;IIS-ManagementScriptingTools;IIS-IIS6ManagementCompatibility;IIS-Metabase;IIS-WMICompatibility;IIS-LegacyScripts;WAS-WindowsActivationService;WAS-ProcessModel;IIS-ASPNET;IIS-NetFxExtensibility;WAS-NetFxEnvironment;WAS-ConfigurationAPI;IIS-ManagementService;MicrosoftWindowsPowerShell;NetFx2-ServerCore;NetFx2-ServerCore-WOW64

The initial setup was with .NET and without FTP and WebDAV. In retrospect, FTP was needed to upload content and it was installed later on. I must note that PKGMGR is almost as awesome as apt-get.

After everything is installed we start PowerShell to manage the server more effectively.

In PowerShell we can enable the IIS features that we want, e.g.:

$IISFeatures = @("Web-Asp-Net45", "Web-Net-Ext", "Web-ISAPI-Ext", "Web-ISAPI-Filter", "Web-Filtering", "Web-IP-Security")

Add-WindowsFeature -Name $IISFeatures -LogPath "$Env:ComputerName.log" -Source \\Server\Share\sources
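To double-check which role services actually got installed, the feature list can be filtered (a quick sketch using the Server Manager cmdlets):

  • PS:\> Get-WindowsFeature Web-* | Where-Object { $_.Installed }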

Soon after, the web server is ready and serving…

However, the default setup is not what we want. Let’s follow best practices for IIS…

As a general rule of thumb, default installations of most software are not considered secure or robust. This means that further steps are needed to secure the web server effectively. A search for “IIS best practice standards” gives us an idea of what needs to be done, as summarized below:

  1. Stop Default Web Site
  2. Stop Default application pool
  3. Each site should use its own associated Application Pool
  4. Each site should have Anonymous Authentication configured to use the AppPoolIdentity
  5. Web root directory should be on a separate disk
  6. Move the log files to a separate disk

1. Stopping default website:

In PowerShell:

Load the WebAdministration module:

  • PS:\> ipmo WebAdministration

Stop the Default Web Site from starting automatically at boot:

  • PS:\> Set-ItemProperty 'IIS:\Sites\Default Web Site' -Name serverAutoStart -Value $false

Stop the Default Web Site

  • PS:\> Stop-WebSite 'Default Web Site'

*Optionally: remove the Default Web Site

  • PS:\> Remove-Website 'Default Web Site'
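As a quick sanity check, the sites and their state can be listed:

  • PS:\> Get-Website | Select-Object Name, State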

2. Stopping Default application pool:

  • PS:\> Stop-WebAppPool DefaultAppPool
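Optionally, to keep the DefaultAppPool from starting again after a reboot, its autoStart property can be cleared (a sketch using the same WebAdministration provider):

  • PS:\> Set-ItemProperty IIS:\AppPools\DefaultAppPool -Name autoStart -Value $false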

3. Each site should use its own associated Application Pool:

Create a new website & change the default web root. A dedicated pool for it is sketched right after:

  • PS:\> New-Item IIS:\Sites\Demo -bindings @{protocol='http';bindingInformation=':80:*'} -PhysicalPath F:\wwwroot\Demo
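The new site should also run under its own dedicated Application Pool (best practice 3 above). A minimal sketch; DemoPool is the pool name referenced later in this post:

  • PS:\> New-Item IIS:\AppPools\DemoPool
  • PS:\> Set-ItemProperty IIS:\Sites\Demo -Name applicationPool -Value DemoPool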

4. Each site should have Anonymous Authentication configured to use the AppPoolIdentity

  • PS:\> Set-WebConfigurationProperty /system.webServer/security/authentication/anonymousAuthentication -Name userName -Value ""

*At this point I must note that using PowerShell was becoming harder and more time consuming, so I started the IIS remote management service to check the configuration more effectively:

C:\> net start wmsvc
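If remote management was not enabled beforehand, the management service also needs its registry switch set before it accepts remote connections (an assumption worth verifying on your build; it requires the IIS-ManagementService component installed above):

  • PS:\> Set-ItemProperty HKLM:\SOFTWARE\Microsoft\WebManagement\Server -Name EnableRemoteManagement -Value 1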

5. Fix permissions:

Root folder is at f:\wwwroot

Remove the permissions inherited from the parent on this directory, so that access to its subfolders and files can be defined explicitly:

  • F:\>ICACLS <path_to_root> /INHERITANCE:R

Remove Users from being able to access this directory (only admins should have full access to the web root folder):

  • F:\>ICACLS <path_to_root> /remove Users

Allow read access to the Application Pool on the Web page folder (f:\wwwroot\Demo)

  • F:\>ICACLS <path_to_site> /grant "IIS AppPool\<app_pool_name>":(OI)(CI)R
* Another typical installation scenario would be to give full (modify) access to the Application Pool, but this is not suggested:
  • C:\> ICACLS <path_to_site> /grant "IIS APPPOOL\<app_pool_name>":(OI)(CI)M

6. Finally, move the log files to a separate disk:

  • PS:\>Set-ItemProperty IIS:\Sites\Demo -name logfile.directory -value F:\weblogs

This concludes the “following best practices” part of the post. Now it is time to test the configuration. I tend to find that exploiting the server (as I would normally do in an engagement) is the most effective way of testing. This process involves identifying the issues and then modifying the configuration to combat them.

Let’s exploit ourselves!?

As a first step an ASP web shell was uploaded. Obviously this is not something to have on your website, but we are trying to emulate a vulnerable web application, or a web application with vulnerabilities that could allow a web shell to be uploaded.

The web shell allows us to execute commands. This is not unexpected; after all, it is a web shell. The first issue identified was that we could read other parts of the file system. As expected (due to the permissions set above) we cannot write to any part of the file system or to the website’s folder.

However, it was possible to create a new folder at the disk root directory (e.g. f:\temp), and that folder gave full permissions to the Application Pool. Following that, it was possible to upload a Meterpreter payload and execute it to get an interactive shell.

The reason behind this was that the default permissions on the hard disk root gave full access to any user. A very simple mistake, but one with devastating effects for the web server. Moreover, changing the permissions of the hard disk root directory was not suggested anywhere in the standards I was following. Additionally, permissions on the %TEMP% folder should also be reviewed, as typically this folder can also be accessed by any user.

Lastly, I must add that the exploit was running with restricted user permissions. There are a number of techniques for escalating our privileges, but as Windows Server 2012 is new, none of the commonly used ones was successful, at least without rebooting the server. In any case the server is considered exploited.

Identifying & fixing the problems:

Problem #1:

The AppPool was not restricted to the wwwroot\Demo folder and had access to other parts of the file system.

To fix this, remove user permissions on the drive root directories:

  • C:\> ICACLS <path_to_drive> /remove Users
  • C:\> ICACLS <path_to_drive> /remove Everyone
* For both F: and C: drives

Problem #2

Executing the exploit.

First, to make it more realistic, let’s assume the application has legitimate upload functionality and it is therefore possible to upload files to the web server. For this, an upload folder with read and write permissions was added.
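For reference, such a folder could be set up along these lines (the path and rights are illustrative; unlike M, the (R,W) combination grants read and write but no execute):

  • C:\> ICACLS F:\wwwroot\Demo\upload /grant "IIS APPPOOL\DemoPool":(OI)(CI)(R,W)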

We were able to upload the exploit again, but the Application Pool had no execute privileges in that folder, so it was not possible to run it.

Problem #3

Although we cannot run an exploit, it is still possible to upload a web shell and access it through the web server. This could be possible by abusing the upload functionality of any legitimate web application. To combat this we must instruct the server not to run ASP pages/files from within our upload folder.

To remove the functionality:

Create a web.config file inside the upload folder with the following content:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <clear />
    </handlers>
  </system.webServer>
</configuration>

This instructs the server to clear all the file handlers for that folder and not serve any content; for example, .asp files will no longer be handled by the ASP engine.

As a result, even though the web shell is inside the upload folder, when trying to access it we receive a 404 File Not Found error.

Additionally, to prevent the web.config file itself from being overwritten through the web shell (every object inside the upload folder inherits the IIS AppPool\DemoPool write permissions), its permissions should be changed to:

  • C:\> ICACLS <path>\web.config /inheritance:r /grant:r "IIS APPPOOL\DemoPool":R Administrators:F

Famous last words:

As per the above examples, following best practices helps the security of the web server, but in many cases it can lead to a dangerous false sense of security. Following any post blindly (this one included) is not recommended. Continuously testing and modifying the configuration until it reaches the desired state (where the whole configuration is as restricted as it can be) is generally a better approach, one which helps create a truly secure and robust server.

Benefits of penetration testing

Wednesday, February 23rd, 2011

One of the questions that we get from time to time is “Why should I conduct a penetration test?” Undoubtedly every business works in a different way and the value of conducting a penetration test varies in each case. Some businesses might manage IT security in a different way than others and therefore a penetration test might be relevant in different ways. However, it is possible to find some common ground which will almost certainly apply to every organization.

The following list shows the main benefits of penetration testing:

  • Manage Risk Properly

For many organizations, the foremost benefit of commissioning a penetration test is that it will give you a baseline to work upon in order to mitigate the risk in a structured and optimal way.

A penetration test will show you the vulnerabilities in the target system and the risks associated with them. An educated evaluation of the risk will be performed so that the vulnerabilities can be reported as High/Medium/Low risk issues.

The categorization of the risk will allow you to tackle the highest risks first, maximising your resources and minimizing the risk efficiently.

  • Increase Business Continuity

Business continuity is usually the number one security concern for many organizations. A breach in the business continuity can happen due to a number of reasons. Lack of security is one of them.

Insecure systems are more likely to suffer a breach in their availability than secured and hardened ones. Vulnerabilities can very often be exploited to produce a denial of service condition, which usually crashes the vulnerable service and breaks the server’s availability.

Penetration testing against mission-critical systems needs to be coordinated, carefully planned and mindful in its execution.

  • Minimize Client-side Attacks

Penetration testing is an effective way of minimizing successful, highly targeted client-side attacks against key members of your staff.

Security should be treated with a holistic approach. Companies assessing only the security of their servers run the risk of being targeted with client-side attacks exploiting vulnerabilities in software such as web browsers, PDF readers, etc. It is important to ensure that the patch management processes are working properly, updating the Operating System and third-party applications.

  • Protect Clients, Partners And Third Parties

A security breach could affect not only the target organization, but also its clients, partners and third parties working with it. Taking the necessary actions towards security will enhance professional relationships, building up trust and confidence.

  • Comply With Regulation or Security Certification

The compliance section of the ISO 27001 standard requires managers and system owners to perform regular security reviews and penetration tests, undertaken by competent testers.

PCI DSS also requires penetration testing of relevant systems, performed by qualified penetration testers.

  • Evaluate Security Investment

A penetration test offers a snapshot of the current security posture and an opportunity to identify potential breach points.

The penetration test will provide you with an independent view of the effectiveness of your existing security processes, ensuring that patching and configuration management practices have been followed correctly.

This is an ideal opportunity to review the efficiency of the current security investment. What is working, what is not working and what needs to be improved.

  • Protect Public Relationships And Brand Issues

A good PR and brand position, built up over years and with considerable investment, can suddenly change due to a security breach. Public perception of an organization is very sensitive to security issues, which can have devastating consequences that may take years to repair.

As this post explains, there are very valid reasons to perform a penetration test on your infrastructure. Contact us if you need more details on how we can help you.

Penetration testing – service or commodity

Monday, February 23rd, 2009

We face this kind of issue every day. There are two different approaches to web application penetration tests:

  • An increasing number of companies buy automatic web scanners, run them, generate some results and put them in a report-shaped tin, ready to go to the client. No human interaction with the application is needed.
  • Some other companies allocate X number of days of a highly skilled consultant to assess the security of your web application. Among many other tests the consultant will also run automatic web scanners, but that is only scratching the surface of a real penetration test. The consultant will use all his/her experience to analyse many other factors of the application.

Penetration testing is all about assurance. In the first case the client will get some useful results, no doubt about it, but what level of assurance will it get? The report will cover the vulnerabilities discovered by XYZ software. Is that enough? I don’t think so, but that is for the client to decide. There is no question that the report will be incomplete and many issues will be missed.

In the second scenario the client gets the assurance that the results obtained were the work of a motivated attacker focused on the application’s security for X number of days. Is that enough? Again, it is up to the client to decide, but in my opinion it gets much closer to an acceptable assurance level.

It all depends on what you want to be protected against. The decision is yours.

Practical attack against SSL certificates – Creating a rogue CA certificate

Tuesday, December 30th, 2008

In a presentation at the Chaos Communication Congress (Berlin, 27-30 December 2008) Alexander Sotirov, Marc Stevens and Jacob Appelbaum revealed how a weakness in the MD5 hashing algorithm could be used to create a rogue certificate.

Previous research showed the theory of this attack but this is the first practical implementation exploiting this flaw.

SSL uses server certificates to verify the identity of the server (the certificate contains the public key of the owner) and prevent man-in-the-middle attacks. When a user visits a secure (HTTPS) site the web browser retrieves the web server certificate issued by a CA (Certificate Authority). The fundamental security issue arises when a CA signs the certificate using a weak hashing function such as MD5.

Using “chosen-prefix MD5 collisions”, an attacker can take a legitimate CA-signed certificate and create a rogue one, with an arbitrary domain name, that has the same MD5 signature as the original.

The researchers used a cluster of 200 PlayStation 3 consoles to compute the MD5 collision. They used a field in the certificate called the Netscape Comment Extension to inject the necessary data:

[Image: Injected code]

A sample of the certificate can be found at the following URL:

https://i.broke.the.internet.and.all.i.got.was.this.t-shirt.phreedom.org/

The impact of this attack is that an attacker could sign fully trusted certificates and conduct perfect man-in-the-middle attacks.

As anyone could generate this kind of certificate, revocation of known malicious certificates is not a viable option. SECFORCE recommends that the content of the Netscape Comment Extension field (and other similar fields) be checked before accepting a certificate.
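On the defensive side, certificates signed with MD5 can be spotted by their signature algorithm. A minimal PowerShell sketch (assumes the built-in Cert: drive; adjust the store path to whichever store you want to audit):

Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.SignatureAlgorithm.FriendlyName -match 'md5' } |
    Select-Object Subject, Issuer, NotAfter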

Penetration testing and risk management – Consultants vs Monkeys

Thursday, October 30th, 2008

There is no doubt that penetration testing is becoming mainstream now. It looks like businesses are finally concerned about security. Compared to some years ago, the number of companies requesting penetration tests has increased exponentially, and therefore the number of companies conducting them has increased too.

One of the important problems affecting some penetration testing companies is that they conduct penetration tests with a very narrow perspective; they don’t put things into context. I call it monkey work. It is quite easy to run an automated vulnerability scanner and produce a nice report. However, vulnerability scanners are not clever enough to know how a specific vulnerability affects a business.

A typical example is XSS vulnerabilities. Depending on the context they can be devastating or just a minor issue. It is up to the penetration tester to decide how important the security issue is for the business. I call it consultant work, and it is where risk management comes into the game.

At the end of the day a business person just cares about the business. If he/she is commissioning a penetration test it is not for the pleasure of learning about buffer overflows and injection vulnerabilities – it is because he/she thinks the penetration test is good for the business (due to a number of reasons such as client trust, compliance, etc.).

Therefore, what they really want to know about a security issue is:

  • What is the impact on the business
  • What is the likelihood of it happening
  • How it can be solved

What they are not interested in is:

  • Why stack protection mechanisms cannot protect you from a heap overflow
  • How you control EIP on this exploit
  • Why a fuzzer would have never discovered that vulnerability
  • etc…

 
   
 