Below are my notes on what I thought was important out of the
OWASP Top 10 Web Application Security Risks for ASP.NET
SQL Injection
Havij is a tool for testing a web URL for SQL Injection.
Encoding output
Use the appropriate encoding (escaping characters or character sequences) for the context in which you are using the input.
Use AntiXssEncoder.HtmlEncode() to HTML encode input when using Web Forms (available via NuGet or built into .NET 4.5). For example, use this before rendering user input to the screen. The <%: %> syntax will automatically encode it.
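A minimal sketch of both options in Web Forms (the control and variable names are hypothetical):
// Code-behind: HTML encode user input before assigning it to a control
// (AntiXssEncoder lives in System.Web.Security.AntiXss in .NET 4.5)
lblComment.Text = AntiXssEncoder.HtmlEncode(userComment, useNamedEntities: true);

<%-- Markup: the colon syntax below HTML encodes automatically, unlike the equals syntax --%>
<span><%: Request.QueryString["search"] %></span>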
ASP.NET MVC Razor automatically HTML encodes output unless you tell it not to (e.g. Html.Raw()). To allow HTML to be submitted for a specific property, add the [AllowHtml] attribute to that property on the Model/ViewModel.
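A sketch of the attribute and of opting out of encoding in Razor (the class and property names are hypothetical):
// ViewModel: allow HTML to be posted for this one property only
public class CommentViewModel
{
    [AllowHtml]
    public string Body { get; set; }
}

@* Razor encodes by default; Html.Raw opts out, so only use it on trusted or sanitized content *@
<div>@Model.Body</div>
<div>@Html.Raw(Model.Body)</div>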
Use Microsoft.Security.Application.Encoder.JavaScriptEncode() to encode input for JavaScript. For example, when taking input and assigning it to a JavaScript variable.
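A sketch of encoding a value before it lands in script, assuming the AntiXSS NuGet package (Microsoft.Security.Application) is referenced; the query string key is hypothetical:
<script type="text/javascript">
    // JavaScriptEncode escapes the value for a JavaScript context
    // (the single-argument overload also emits surrounding quotes)
    var searchTerm = <%= Microsoft.Security.Application.Encoder.JavaScriptEncode(Request.QueryString["q"]) %>;
</script>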
You can turn off request validation at the site level (web.config) or the page level. See here for more details.
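A sketch of what turning it off looks like; only do this where you genuinely need to accept markup:
<!-- web.config, site level: requestValidationMode 2.0 lets page-level settings take effect in .NET 4+ -->
<system.web>
    <httpRuntime requestValidationMode="2.0" />
    <pages validateRequest="false" />
</system.web>

<%-- Page level (Web Forms) --%>
<%@ Page Language="C#" ValidateRequest="false" %>
In MVC, the [ValidateInput(false)] attribute on an action serves the same purpose per action.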
Hiding Payload
URL encoding can be used by hackers to get around XSS detectors and to make the payload unclear to the average user. Another approach is to use a URL shortener like tinyurl.com.
Session Persistence
Don't put sensitive information in the URL since URLs end up in web logs, browser history, etc. A session ID is definitely sensitive information; it can allow someone else to be you while that session is still active. Cookies are a much safer place to pass and persist session IDs. The only downside is that cookies need to be enabled, but most people have them enabled, and cookies are the default behavior for ASP.NET. Never send a cookie containing a secret over an insecure connection.
Session Timeouts
A sliding forms timeout is nice in that it can be extended forever just by hitting a URL, but that also gives a large window in which someone can use a hijacked session. It is great for valid users, and hackers love it too. So, turn off sliding expiration if you can to increase security.
The alternative is a fixed session timeout, but then users lose their session at the end of the timeout no matter what they are doing. There is no perfect solution for all cases; set the timeout to strike a balance between security and convenience, and do change the default values to meet your specific needs.
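A sketch of a fixed, non-sliding forms timeout in web.config (the 20-minute values are illustrative):
<system.web>
    <authentication mode="Forms">
        <!-- slidingExpiration="false" makes the ticket expire at a fixed time after login -->
        <forms loginUrl="~/Account/Login" timeout="20" slidingExpiration="false" />
    </authentication>
    <sessionState timeout="20" />
</system.web>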
Indirect Reference Map
Indirect references can be used to conceal internal keys, but they are NEVER a substitute for access controls. Each internal ID is replaced with a temporary indirect reference (stored on the server, in the session for example, and never exposed to the browser / user). This temporary indirect reference is cryptographically random and has no pattern to guess from. Once the session ends, this mapping should expire. The map should be user specific so that it can't be used by any other user. This greatly reduces the opportunity for attack by limiting who can use the reference, limiting how long it is valid, and making it impossible to guess from a pattern.
A thought on GUIDs: a GUID is not a map. It is unique and does not have a pattern, so it should be viewed as obfuscation of the key. GUIDs are not user specific, and proper access control is a MUST if they are used. They do have the advantage that they cannot be enumerated easily and are close to being globally unique. A better choice would be to use System.Security.Cryptography.RNGCryptoServiceProvider.GetBytes(), then HttpServerUtility.UrlTokenEncode(), and a map instead of a GUID.
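A minimal sketch of a per-user indirect reference map kept in session state, using the calls mentioned above (the class, session key, and method names are hypothetical):
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Web;

public static class IndirectReferenceMap
{
    // Replace an internal id with a random, URL-safe token that only maps back on the server
    public static string CreateIndirectReference(HttpSessionStateBase session, int internalId)
    {
        var bytes = new byte[16];
        using (var rng = new RNGCryptoServiceProvider())
        {
            rng.GetBytes(bytes); // cryptographically random, no guessable pattern
        }
        string token = HttpServerUtility.UrlTokenEncode(bytes);

        var map = session["RefMap"] as Dictionary<string, int> ?? new Dictionary<string, int>();
        map[token] = internalId;  // token -> real key, never exposed to the browser
        session["RefMap"] = map;  // per user, and it expires with the session
        return token;
    }

    // Resolve a token back to the internal id; null means it was not issued to this session
    public static int? Resolve(HttpSessionStateBase session, string token)
    {
        var map = session["RefMap"] as Dictionary<string, int>;
        int id;
        if (map != null && token != null && map.TryGetValue(token, out id))
            return id;
        return null;
    }
}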
Access Control
Just because a URL is not visible in the browser's URL bar, don't assume a hacker can't look at the page source for URLs to attack. The best defense is code on the controller action that checks that the directly or indirectly referenced keys belong to the user sending them. For example, if the user passes the ID of a record to be displayed, the action that displays the record should check that the user has access to that record.
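A sketch of that check in a controller action (the repository and property names are hypothetical):
[Authorize]
public ActionResult Details(int id)
{
    var record = _repository.GetById(id);

    // The key may be valid, but it must also belong to the user making the request
    if (record == null || record.OwnerUserName != User.Identity.Name)
        return new HttpStatusCodeResult(403);

    return View(record);
}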
Cross Site Request Forgery
Put simply, it is a way to trick the browser into making a valid request from an evil site by exploiting the fact that cookies are sent with every request to the domain that set them. The evil site makes a request that is identical in form to the request the original site would have made, but with a malicious payload. It could, for example, take advantage of an authentication cookie that was not secured properly.
ASP.NET MVC has a mechanism that adds randomness via a CSRF token. The token is known to the legitimate page as a hidden form field, and to the browser via a cookie. When the request is sent to the server, both the hidden value and the cookie are sent, and the server compares them; they must match. The trick is that the hacker won't know what value to put in the form, so the attack will fail. This is actually very effective protection.
To implement this, you need to do two things: add the ValidateAntiForgeryToken attribute to the controller action, and add @Html.AntiForgeryToken() to your view just inside the BeginForm() block.
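A minimal sketch of both pieces (the controller, action, and model names are hypothetical):
@* View: renders the hidden __RequestVerificationToken field and sets the matching cookie *@
@using (Html.BeginForm("Edit", "Account"))
{
    @Html.AntiForgeryToken()
    <input type="submit" value="Save" />
}

// Controller: rejects any POST whose form token doesn't match the cookie
[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Edit(AccountViewModel model)
{
    // ... normal processing ...
    return RedirectToAction("Index");
}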
Once you implement this, you will see a __RequestVerificationToken field in the request body when the form is posted, and a matching cookie also called __RequestVerificationToken. When a CSRF attack is executed, the authentication cookie and the __RequestVerificationToken cookie are sent with the attack request, but ASP.NET MVC returns a 500 error because there is no (or an invalid) __RequestVerificationToken form field in the request. The hacker doesn't have direct access to the cookies (they are sent automatically by the browser), so the hacker has no way of knowing the CSRF token value. The attack has been stopped.
NOTE: There is also the Authorize attribute, often the first line of defense, but it is not as useful against CSRF since authenticated users are often the ones tricked into making the attack requests.
NOTE: Checking the referring site can be helpful, but it doesn't protect against CSRF.
Trace.axd
It exposes things such as cookies (auth token, CSRF token, etc.), potentially the connection string, software versions, etc. Luckily the URL is disabled by default, but it can be enabled in both MVC and Web Forms. Use a config transformation to remove the trace node in the Release configuration in case it is ever enabled in web.config.
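A sketch of that transform in Web.Release.config:
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
    <system.web>
        <!-- Strip any trace element from web.config when publishing with the Release configuration -->
        <trace xdt:Transform="Remove" />
    </system.web>
</configuration>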
Encrypt Connection String in web.config
It is good to have multiple layers of security. In case a hacker is able to access your web.config you want to limit what he can see. The connection string is quite important and should be encrypted.
To do so, run a command prompt as Administrator and execute:
aspnet_regiis -site "name of site in IIS" -app "virtual path of app, or /" -pe "connectionStrings"
This uses the encryption key on the server and is server specific. So, it must be executed on that server.
The remaining risk is that someone with access to the server can run the matching decrypt command to recover the string, so limit who can access the server.
Enable Retail Mode
Retail mode prevents leaking exception data on the Yellow Screen of Death (YSOD) for ALL applications on the server, even when an application's configuration is wrong. To enable it, add the following deployment tag to machine.config. This forces the same behavior as enabling customErrors and acts as a safety net for every application.
<system.web>
    <deployment retail="true"/>
</system.web>
Password Rules
Make it harder for hackers by requiring longer passwords, not allowing dictionary words or variations of them, and requiring special characters, mixed case, and numbers.
Storage of Passwords
If a hacker can get the username, the salt, the hashed value, and the operation the system uses to do the hashing, they can brute force attack and recover over 65% of the passwords (on average). To do this, they simply call the hashing operation with a password from their list of likely passwords using the salt they obtained, then compare the hashes. If the hashes match, they know the password the user originally chose and have everything needed to log in as that user.
The Membership Provider's default implementation is near useless here because an attacker with the database has both the hash and the salt. Also, SHA1 is too fast an algorithm, which means it can be cracked faster.
Be careful: if a password has been used elsewhere on the internet and its hash has been compromised, a simple Google search for that hash can reveal the original password. If a long salt was used, rainbow tables become useless. Salts make it harder to crack a hash, but with modern GPUs computing 7.5 billion hashes per second (in 2012), it is literally just a matter of time.
Check out Kerckhoffs' principle if you are wondering whether your security is good. Basically it says that if you are not willing to reveal the design of how your security works, and are thus relying on that secrecy as part of the security, then your security is not good enough. You must assume the attacker will learn how the security works soon enough.
Cryptography - Hashing
Ironically, faster algorithms are not good for password hashing. We actually want the hashing algorithm to take as long as we can stand, balancing the processing required to log in or register against how long it would take to crack. SHA1 is probably not sufficient anymore. One way around this is to apply the algorithm, say, 1000 times, but again that probably isn't enough. BCrypt allows us to directly control how much work the algorithm does via its work factor.
No algorithm is foolproof, especially when lists of passwords, hashes, and salts are cataloged. It is all about making an attack too difficult, time consuming, or financially unjustified. Even the most complex algorithms can be circumvented by cataloging inputs and outputs.
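A minimal sketch of the BCrypt approach, assuming the BCrypt.Net NuGet package (the work factor of 12 is illustrative; raise it as hardware gets faster):
// Hash at registration: the work factor makes each hash deliberately slow
string hash = BCrypt.Net.BCrypt.HashPassword(password, 12);

// Verify at login: the salt and work factor are embedded in the stored hash
bool matches = BCrypt.Net.BCrypt.Verify(password, hash);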
Cryptography - Encryption
Encryption is less desirable than hashing for passwords because if the key is obtained, all the encrypted values can be decrypted and the original values are exposed. Hashing is a one-way process; the original value cannot be recovered using the algorithm (it must be cracked as noted earlier).
The trick with encryption is that you must manage the safety of the key. DPAPI uses the machine key of the server the code is running on to do the encryption and decryption (a symmetric algorithm), so the application does not have to manage the key itself.
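A minimal sketch using the DPAPI wrapper in System.Security.Cryptography (the entropy string and secretValue variable are hypothetical):
using System.Security.Cryptography;
using System.Text;

// Encrypt with the machine's key; only code running on this machine can decrypt it
byte[] entropy = Encoding.UTF8.GetBytes("app-specific-entropy");
byte[] cipher = ProtectedData.Protect(
    Encoding.UTF8.GetBytes(secretValue), entropy, DataProtectionScope.LocalMachine);

// Decrypt later on the same machine
string original = Encoding.UTF8.GetString(
    ProtectedData.Unprotect(cipher, entropy, DataProtectionScope.LocalMachine));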
Password Hacking tools
hashcat - advanced password recovery - brute force for comparing hashes and common hashing algorithms
RainbowCrack - uses rainbow tables, pregenerated lists of hashes for passwords meeting different rules.
Restricting Urls in MVC
Don't use web.config location tag permission restrictions in MVC because they are based on the URL, not the page. In MVC, more than one route can point to the same controller action, but only the URL named in the location tag is protected. This means if you have two routes you need two location tags in web.config, which can lead to very buggy and inconsistent access control. It worked well in Web Forms because the page is the URL.
For MVC you want to protect the controller and its actions. The way to do that is the Authorize attribute. Using it bare requires an authenticated user; if you want to restrict to roles, pass them as a comma-separated list to the Authorize attribute.
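A sketch of both forms (the controller, action, and role names are hypothetical):
[Authorize]                              // any authenticated user may reach this controller
public class AdminController : Controller
{
    [Authorize(Roles = "Admin,Manager")] // this action is further restricted to these roles
    public ActionResult AuditLog()
    {
        return View();
    }
}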
Don't forget to protect resources like JavaScript, reports, AJAX calls, APIs, PDFs, etc. Really, anything that is not in the browser's URL bar but does show up in the network traffic.
Side note: never send sensitive data in the URL because it can end up in web server logs, error logs, browser history, etc.
Just because a URL can't be guessed doesn't make it secure; URLs leak through many different routes, so they don't have to be guessed.
Insufficient Transport Layer Protection
If TLS (Transport Layer Security) is not done properly, it opens up the opportunity for a MitM (Man in the Middle) attack. This can be done by physically tapping an ethernet cable, intercepting traffic at the ISP level, monitoring unprotected traffic at a wifi hotspot, or creating a rogue wireless access point.
With MitM the attacker can see all HTTP traffic from the victim (not HTTPS, though this is debatable). That includes any cookies that are not restricted to HTTPS. All cookies that carry sensitive information (arguably all do) should be marked so they are only sent over HTTPS and never sent over HTTP.
To do this in ASP.NET you set the Secure flag on the cookie, either site-wide by adding <httpCookies requireSSL="true" /> to web.config or per cookie via Response.Cookies.Add(). Once this is set, if you hit a page over HTTP the cookie will not be sent. If it is the authentication cookie, you will be asked to log in again (and won't be able to) when following an HTTP link from an HTTPS page, because the auth cookie is not sent for the HTTP page.
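A sketch of both options (the cookie name and value are hypothetical):
<!-- web.config: require SSL for all cookies, including the forms authentication cookie -->
<system.web>
    <httpCookies requireSSL="true" />
    <authentication mode="Forms">
        <forms loginUrl="~/Account/Login" requireSSL="true" />
    </authentication>
</system.web>

// Per-cookie override in code
Response.Cookies.Add(new HttpCookie("Preferences", "compact")
{
    Secure = true,   // only ever sent over HTTPS
    HttpOnly = true  // not readable by client-side script
});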
You can also require HTTPS per controller or action by adding the RequireHttps attribute. This makes sure the URL cannot be accessed over HTTP. It is still best to link users directly to the HTTPS version instead of relying on the HTTP-to-HTTPS redirect.
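A sketch of the attribute on a single action (the names are hypothetical):
public class AccountController : Controller
{
    [RequireHttps] // GET requests over HTTP are redirected to HTTPS; other verbs get an error
    public ActionResult Login()
    {
        return View();
    }
}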
As a best practice, and to avoid browser mixed-content warnings, the entire site should use HTTPS rather than a mix of HTTP and HTTPS. The warning can show up when the page is served over HTTPS but a script tag references an http:// source. A clean way around this is a protocol-relative URL: set src="//hostname/somelib.js", i.e. just remove the http: or https: from the URL. It will then assume the protocol of the page (e.g. HTTPS). Be sure the URL you are accessing supports both HTTP and HTTPS.
As a side note, when using a load balancer the request arrives at the load balancer as HTTPS, but by the time it reaches the web server itself it may be plain HTTP (SSL is terminated at the load balancer). In this case the load balancer typically adds an X-Forwarded-Proto header to indicate the original scheme, so built-in checks that look at the connection won't see HTTPS. Handling this requires custom implementation.
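A minimal sketch of such custom handling, assuming the load balancer sets an X-Forwarded-Proto header (the header name depends on your environment):
// Treat a request as secure if the connection is HTTPS
// or the load balancer says the original request was HTTPS
public static bool IsSecureRequest(HttpRequestBase request)
{
    if (request.IsSecureConnection)
        return true;

    string proto = request.Headers["X-Forwarded-Proto"];
    return string.Equals(proto, "https", StringComparison.OrdinalIgnoreCase);
}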
HSTS (HTTP Strict Transport Security) tells the browser to never make an HTTP request to the site; the Strict-Transport-Security header does this. It has limitations and patchy browser support, but it is still a good line of defense. It does require the certificate to be trusted.
HSTS can be implemented in an ASP.NET MVC application by adding the following to your web.config
<system.webServer>
    <httpProtocol>
        <customHeaders>
            <add name="Strict-Transport-Security" value="max-age=31536000"/>
        </customHeaders>
    </httpProtocol>
</system.webServer>
Do NOT show a login form over HTTP (use HTTPS), even if the post is to HTTPS. The reason is that a MiTM attack could inject code on the page to do something like get form values (username and password) and send them off to another location in parallel to the normal submission.
Do NOT load HTTPS login forms inside an iframe on an HTTP page, because the parent page is vulnerable and could be manipulated to load a different login form into the iframe. A better choice is to show the actual login in a full-screen window so users can see the URL and the secure icon in the browser.
Do NOT put usernames and passwords in URLs because they often end up in web server logs in plain text, etc.
Unvalidated Redirects and Forwards = bad reputation
They are useful to attackers because unvalidated redirects abuse the trust the victim has in a legitimate site.
Imagine you have a site and want to track when a user clicks a link, so you have a redirect action on your controller that takes the target URL the user is to be redirected to. Another example is how ASP.NET takes the user to the login page when they try to access a page they don't have access to. In these cases the target URL is an internal URL, or at least is expected to be.
The problem comes when a hacker manages to change the unvalidated target URL in the redirect link. From the user's perspective the site they are on is one they likely trust or that looks legitimate, but when the hacker gets to change the target URL they can take the user wherever they want. The user is harmed in this scenario, and so is the site hosting the redirect.
Now imagine the user receives a spam email with a link like http://1.usa.gov/OYCBM7. It could arrive via email, social media, compromised legitimate sites, etc. A URL shortener makes it difficult to tell where the link actually goes, and the .gov domain makes people likely to trust it. The long version of the URL could be something like http://trustedsite.com/redirect?url=http://evilsite.com/malware. The query string could also be encoded to obfuscate it, which also gets past blacklist detectors.
If you don't validate redirects and forwards on your site, you are a potential target for hackers to use your site for their evil ways. To protect your site, validate the query string before you redirect to the target. The best way is a whitelist of acceptable URLs; the whitelist could be a regex, a string literal, an int, a list, etc. This check belongs in the controller action (or wherever the redirect is done).
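A minimal sketch of a whitelist check in the redirect action (the allowed hosts and action name are hypothetical; Contains with a comparer needs System.Linq):
// Only redirect to hosts we explicitly trust; anything else goes back to the home page
private static readonly string[] AllowedHosts = { "www.example.com", "blog.example.com" };

public ActionResult RedirectTo(string url)
{
    Uri target;
    bool allowed = Uri.TryCreate(url, UriKind.Absolute, out target)
                   && AllowedHosts.Contains(target.Host, StringComparer.OrdinalIgnoreCase);

    return allowed ? (ActionResult)Redirect(url) : RedirectToAction("Index", "Home");
}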
There are scenarios where you can't use a whitelist. In that case you can check the referrer (Request.UrlReferrer) to determine whether the user came from our site or somewhere else. In particular, UrlReferrer will be null when the request came from a non-browser source such as an email client, a Twitter client, pasting into the browser's URL bar, etc. We can also check whether the request came from our own site (Request.UrlReferrer.Host != Request.Url.Host).
This is not 100% risk free; for example, the referrer header value can be faked or changed to whatever circumvents the checks. That is not what happens when a victim simply follows a link, though.
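A sketch of the referrer check described above; treat it as a fallback, not a substitute for a whitelist (the action name is hypothetical):
// Reject the redirect if there is no referrer or it points at a different host than ours
public ActionResult RedirectTo(string url)
{
    Uri referrer = Request.UrlReferrer;
    bool cameFromThisSite = referrer != null
        && string.Equals(referrer.Host, Request.Url.Host, StringComparison.OrdinalIgnoreCase);

    return cameFromThisSite ? (ActionResult)Redirect(url) : RedirectToAction("Index", "Home");
}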
Security Related Sites
nakedsecurity
hak5
Troy Hunt