Fixing high Windows memory usage caused by a large metafile

If you have a Windows Server 2012 machine that goes unresponsive and seems to need rebooting from time to time, you may want to download RAMMap and check whether the “Metafile” entry is quite large once the server has been up for a while. This can happen on file servers with a lot of activity, and if so you should install the Microsoft Dynamic Cache Service.

Found here: Dynamically manage the size of the Windows System File Cache – https://www.microsoft.com/en-sg/download/details.aspx?id=9258

You can get RAMMap here: https://technet.microsoft.com/en-us/sysinternals/rammap.aspx

There is an excellent article by someone for whom this alone didn’t work, so they created a script to more aggressively empty the working sets and keep the server happy.

Read it here: http://www.toughdev.com/viewpost.php?id=568&t=1
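Before (or after) installing the cache service, it can help to log how big the problem actually gets over time. The metafile lives in the system cache, so Windows’ built-in typeperf can trend the related counters to a CSV you can graph later (the counter choice and intervals below are just a suggestion, not a definitive diagnostic):

```shell
REM Sample paged pool and system cache once a minute for an hour,
REM writing the results to pool.csv for review in PerfMon or Excel.
typeperf "\Memory\Pool Paged Bytes" "\Memory\Cache Bytes" -si 60 -sc 60 -o pool.csv
```

Compare the trend against the Metafile figure RAMMap shows to confirm you are chasing the right consumer.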

Long-term packet captures using Wireshark

Over the years I have used Wireshark to capture packet traces on Windows devices. It did the job and for the most part was an invaluable tool. Until, that is, I had the need to capture packets over a period of time, usually when troubleshooting an intermittent network problem. Just when I needed the tool to work, it had crashed minutes, hours, or days earlier and no one noticed. This week I had the same need and figured Wireshark had fixed this issue, since it had been a year or so since I last needed a long-term capture. Nope, thanks for letting me down again, Wireshark. =)

 

This time I decided to see if there was another way to capture data more reliably, and I stumbled across dumpcap.exe. You can find the manual here: https://www.wireshark.org/docs/man-pages/dumpcap.html

I chose to capture a file per hour and rotate over 25 files to give me some history to review when users complain of issues.

Example command here:

dumpcap -i LAN -b duration:3600 -b files:25 -w y:\captures\packets.cap

LAN is the name of the interface as reported in Wireshark (on Linux it may be eth0), and -w specifies the file name. If you specify a number of files, dumpcap will append to the filename as it rotates.

This configuration will result in a rolling 25 files that overwrite the oldest one.
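One gotcha is getting the interface name right. If you are not sure what to pass to -i, dumpcap can list the interfaces it sees, and you can use either the number or the name from that list:

```shell
# Print the available capture interfaces, then pick one for -i
dumpcap -D
```

On Windows the friendly name (like LAN above) works; on Linux you will typically see eth0 and friends.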

You can also specify a libpcap filter to restrict the types of traffic captured by dumpcap. For example, the following command captures only DNS traffic destined to or coming from 208.67.220.220:

$ dumpcap -i eth0 -f "host 208.67.220.220 and udp port 53" -w dns.cap

Hopefully this is useful to my readers.

 

Michael

Improving MSSQL performance

After hearing complaints of poor application performance for one of my client’s healthcare referral systems, I started looking into the underlying infrastructure, only to find that there was no smoking gun. I ran some queries to see how memory was being consumed by MSSQL, since Task Manager pretty much just shows allocation, not usage. The output from these scripts appeared to show that no deadlocks were currently present, and that SQL really was using an awful lot of memory. One thing I have found is that issues like these sometimes go away with a reboot, and the root cause can be the default maximum memory settings allowing SQL to compete with the server for memory. I planned to implement some memory reservations and check with one of our DBAs to see if he had other recommendations. He turned me on to a script that analyzes Microsoft SQL Server for performance tuning recommendations regarding indexes.

According to the author of the script (excerpt taken from http://blog.sqlauthority.com/2011/01/03/sql-server-2008-missing-index-script-download/):

Performance tuning is quite interesting, and indexes play a vital role in it. A proper index can improve performance and a bad index can hamper it.

Here is the script from my script bank which I use to identify missing indexes on any database.

Please note, you should not create all the missing indexes this script suggests. This is just for guidance. You should not create more than 5-10 indexes per table. Additionally, this script sometimes does not give accurate information, so use your common sense.

Anyway, the script is a good starting point. You should pay attention to Avg_Estimated_Impact when you are going to create an index. The index creation script is also provided in the last column.

 

One nice thing about the script is that the output gives you the commands to create the indexes, as well as the expected performance improvement. What I did was work with our DBA to review the proposed indexes and create a short list that we would implement to gauge the performance impact. Since the server I am working on is SQL 2008, I am also setting memory reservations for the operating system, since older MSSQL seems to have issues when the server is starved for memory.

Read here for recommendations on reserving memory: http://www.sqlservercentral.com/blogs/glennberry/2009/10/29/suggested-max-memory-settings-for-sql-server-2005_2F00_2008/
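For reference, the memory reservation itself is just a pair of sp_configure calls. The 28672 MB below is only an example value for a 32 GB box (leave a few GB for the OS per the recommendations above); adjust it for your hardware:

```sql
-- Enable advanced options so the memory settings are visible
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Cap SQL Server's memory (example value for a 32 GB server)
EXEC sys.sp_configure 'max server memory (MB)', 28672;
RECONFIGURE;
```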

Here are some of the queries I used to look into SQL Server’s memory usage and troubleshoot the issue:


 

SELECT name, value, value_in_use, [description]
FROM sys.configurations
WHERE name LIKE '%server memory%'
ORDER BY name OPTION (RECOMPILE);

 


 

SELECT
(physical_memory_in_use_kb/1024) AS Memory_usedby_Sqlserver_MB,
(locked_page_allocations_kb/1024) AS Locked_pages_used_Sqlserver_MB,
(total_virtual_address_space_kb/1024) AS Total_VAS_in_MB,
process_physical_memory_low,
process_virtual_memory_low
FROM sys.dm_os_process_memory;


 

select
name
,sum(pages_allocated_count)/128.0 [Cache Size (MB)]
from sys.dm_os_memory_cache_entries
where pages_allocated_count > 0
group by name
order by sum(pages_allocated_count) desc


SELECT
DB_NAME(database_id) AS [Database Name]
,CAST(COUNT(*) * 8/1024.0 AS DECIMAL (10,2)) AS [Cached Size (MB)]
FROM sys.dm_os_buffer_descriptors WITH (NOLOCK)
WHERE database_id not in (1,3,4) -- system databases
AND database_id <> 32767 -- ResourceDB
GROUP BY DB_NAME(database_id)
ORDER BY [Cached Size (MB)] DESC OPTION (RECOMPILE);


 

SELECT * FROM sys.dm_os_memory_clerks ORDER BY (single_pages_kb + multi_pages_kb + awe_allocated_kb) desc

 

 


SELECT [name] AS [Name]
,[configuration_id] AS [Number]
,[minimum] AS [Minimum]
,[maximum] AS [Maximum]
,[is_dynamic] AS [Dynamic]
,[is_advanced] AS [Advanced]
,[value] AS [ConfigValue]
,[value_in_use] AS [RunValue]
,[description] AS [Description]
FROM [master].[sys].[configurations]
WHERE NAME IN ('Min server memory (MB)', 'Max server memory (MB)')


 

SELECT [total_physical_memory_kb] / 1024 AS [Total_Physical_Memory_In_MB]
,[available_physical_memory_kb] / 1024 AS [Available_Physical_Memory_In_MB]
,[total_page_file_kb] / 1024 AS [Total_Page_File_In_MB]
,[available_page_file_kb] / 1024 AS [Available_Page_File_MB]
,[kernel_paged_pool_kb] / 1024 AS [Kernel_Paged_Pool_MB]
,[kernel_nonpaged_pool_kb] / 1024 AS [Kernel_Nonpaged_Pool_MB]
,[system_memory_state_desc] AS [System_Memory_State_Desc]
FROM [master].[sys].[dm_os_sys_memory]


 

SELECT object_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Total Server Memory (KB)';


 

Volatility – An advanced memory forensics framework

Are you involved in an incident response engagement and need some free tools to complete your job? I have had good luck with the Volatility Framework used in conjunction with hibernation of the suspect endpoint.

The Volatility Framework is a collection of tools, implemented in Python under the GNU General Public License (GPL v2), for the extraction of digital artifacts from volatile memory (RAM) samples. The extraction techniques are performed completely independent of the system being investigated but offer unprecedented visibility into the runtime state of the system. The framework is intended to introduce people to the techniques and complexities associated with extracting digital artifacts from volatile memory samples and provide a platform for further work into this exciting area of research.

Get it here:

https://github.com/volatilityfoundation

Volatility supports memory dumps from all major 32- and 64-bit Windows versions and service packs including XP, 2003 Server, Vista, Server 2008, Server 2008 R2, Seven, 8, 8.1, Server 2012, and 2012 R2. Whether your memory dump is in raw format, a Microsoft crash dump, hibernation file, or virtual machine snapshot, Volatility is able to work with it. We also now support Linux memory dumps in raw or LiME format and include 35+ plugins for analyzing 32- and 64-bit Linux kernels from 2.6.11 – 3.16 and distributions such as Debian, Ubuntu, OpenSuSE, Fedora, CentOS, and Mandrake. We support 38 versions of Mac OSX memory dumps from 10.5 to 10.9.4 Mavericks, both 32- and 64-bit. Android phones with ARM processors are also supported.
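If you have never run it, a typical Volatility 2 session against a Windows image looks something like this (the file name and profile below are just examples; imageinfo suggests candidate profiles for your particular dump):

```shell
# Ask Volatility to guess the OS profile of the memory image
python vol.py -f memdump.raw imageinfo
# Then run analysis plugins with that profile, e.g. list processes
python vol.py -f memdump.raw --profile=Win7SP1x64 pslist
```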

Recommended Reading on the subject:

http://www.amazon.com/The-Art-Memory-Forensics-Detecting/dp/1118825098

Download the CheatSheet here:

New one is here:

http://volatility-labs.blogspot.com/2014/08/new-volatility-24-cheet-sheet-with.html

Old one is here:

https://code.google.com/p/volatility/downloads/detail?name=CheatSheet_v2.3.pdf

Other Malware Analyst tools here:

http://www.malwarecookbook.com/

Submit Malware for Analysis

https://malwr.com/submission/

https://www.virustotal.com/

Set up an automated Malware Analysis Lab

http://www.cuckoosandbox.org/

 

Building scalable web applications on a Windows IIS farm? Here is the solution for replicating the IIS metabase, SSL certs, bindings, and code.


Web Deploy 3.5, the answer to scalability with web farms.

http://www.iis.net/downloads/microsoft/web-deploy

Similar to the use of Chef or Puppet, you can use Web Deploy 3.5 to push web server configuration and content to all the web servers in your farm. Previously I used IISCnfg to handle this back in the IIS6 days, but haven’t needed to do this on a Windows web server for a while. Fast forward to Windows Server 2012, and you can use Web Deploy 3.5 to replicate the content, the configuration, etc. to servers behind a load balancer.

Step 1: Install the web deploy code on both web servers

Step 2: Setup an account for use to replicate the settings

Step 3: assign permissions on IIS to allow the changes to be deployed

Step 4: create a scheduled task with the script to keep it in sync (I added -whatif to the command so it won’t do anything unless you remove that part)

 

Here is the command I used:

msdeploy -verb:sync -source:webserver,computername=<FQDN of the source servername>,username=<domain\username>,password=<password>,authtype=ntlm -dest:auto,computername=<FQDN of the destination servername>  -whatif >c:\msdeploy.log
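For Step 4, one approach is to wrap the command above in a batch file and schedule it. Something along these lines should work (the task name, schedule, and paths are placeholders; adjust them for your environment):

```shell
REM Save the msdeploy sync command above as c:\scripts\iissync.cmd,
REM then run it hourly under the deployment account.
schtasks /create /tn "IIS Web Deploy Sync" /sc hourly /ru <domain\username> /rp <password> /tr "c:\scripts\iissync.cmd"
```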

Obviously you will need to install the proper roles on the other web servers, and you can review msdeploy.log to see which dependencies or roles need to be installed.

Here are some more examples from Microsoft on using msdeploy:

http://www.iis.net/learn/publish/using-web-deploy/synchronize-iis

Change your default RDP client settings

Do you often connect to servers using the RDP client and wish to have your local drives mapped, or have display preferences saved? The following steps walk you through changing the default settings.

Step 1

Open your My Documents folder and locate the hidden file Default.rdp.

Right-click the file and select “Edit”.

Step 2

The RDP client will now open. Make whatever changes you would like to have as default settings (for example, mapping local drives).

Step 3

Navigate to the General tab and click “Save” under Connection settings.

Now launch your RDP client (Start -> Run -> mstsc.exe).

The settings are now there by default.


Microsoft EMET 5.2 released – Stop malware in its tracks


If you have endpoint security concerns you should do yourself a favor and look into Microsoft EMET 5.2. I am having early success in testing this and recommend you do the same. According to Microsoft, “There is no one tool capable of preventing all attacks. EMET is designed to make it more difficult, expensive and time consuming, and therefore less likely, for attackers to exploit a system.”

Here is an excerpt from the product download page:

The Enhanced Mitigation Experience Toolkit (EMET) is designed to help customers with their defense in depth strategies against cyberattacks, by helping detect and block exploitation techniques that are commonly used to exploit memory corruption vulnerabilities. EMET anticipates the most common actions and techniques adversaries might use in compromising a computer, and helps protect by diverting, terminating, blocking, and invalidating those actions and techniques. EMET helps protect your computer systems even before new and undiscovered threats are formally addressed by security updates and antimalware software. EMET benefits enterprises and all computer users by helping to protect against security threats and breaches that can disrupt businesses and daily lives.

Helps protect in a wide range of scenarios

EMET is compatible with most commonly used third-party applications at home and in the enterprise, from productivity software to music players. EMET works for a range of client and server operating systems used at home and in the enterprise**. When users browse secure HTTPS sites on the Internet or log on to popular social media sites, EMET can help further protect by validating Secure Sockets Layer (SSL) certificates against a set of user-defined rules.


Download it here: https://technet.microsoft.com/en-us/security/jj653751

In IT security it helps to have a layered or “defense in depth” approach. If you have budget for a tool, I recommend you also look at Palo Alto “Traps”, which is a more commercialized offering and has some unique improvements over what Microsoft is doing with EMET.

Read more here: https://www.paloaltonetworks.com/products/endpoint-security.html

 

HOWTO: ping a list of hosts from a file and output to txt

I used to do this with a batch file, and for some reason it didn’t work on Windows 7 SP1 today. So I asked a colleague for a PowerShell way to do this quickly. Props to Robert Ramsay over at NAMMCAL for the script.

Copy the following and save it in the c:\scripts folder as ping.ps1:

$names = Get-Content "C:\scripts\list.txt"
foreach ($name in $names) {
    if (Test-Connection -ComputerName $name -Count 1 -ErrorAction SilentlyContinue) {
        Write-Host "$name is up" -ForegroundColor Magenta
    }
    else {
        Write-Host "$name is down" -ForegroundColor Red
    }
}
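For reference, C:\scripts\list.txt is just one hostname or IP address per line; the names below are made-up examples:

```
dc01
fileserver02
192.168.1.50
```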

 

Next you will need to open an elevated (administrator) PowerShell prompt and set your execution policy to unrestricted:

Set-ExecutionPolicy Unrestricted

Next, in your PowerShell window, change directory to c:\scripts and type .\ping.ps1 > output.txt, and the output will be saved to the file. DONE

HP Array Configuration Utility (ACU) CLI commands

Had to look this up again today when troubleshooting an HP embedded array controller (P410i).

 

The issue I ran into was that the event logs were displaying the following errors:

Event ID 11

The driver detected a controller error on \Device\Harddisk1.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

and

Event ID 24606

Logical drive 2 configured on array controller P410i [Embedded] returned a fatal error during a read/write request from/to the volume.

Logical block address 6303024, block count 8 and command 32 were taken from the failed logical I/O request.

Array controller P410i [Embedded] is also reporting that the last physical drive to report a fatal error condition (associated with this logical request), is located on bus 0 and ID 3.

The HP Array Controller Utility showed everything was working properly. When I tried to run the diagnostics I was unable to do so, as a VB error popped up. Instead of troubleshooting that issue (we are most likely going to replace this server with a VM in a month or so), I decided to look at the CLI utility instead. I could not remember the commands to show the config and the status, so off to Google I went. Here are the commands I found useful:

 

ctrl all show config

ctrl all show config detail

ctrl all show status
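On this generation of hardware the CLI binary is hpacucli, so the full invocations look like the lines below. To chase the failing physical drive from the 24606 event, the physical-drive status view is also handy (slot 0 is assumed here; check your controller listing for the real slot number):

```shell
# Show the full configuration of every detected controller
hpacucli ctrl all show config detail
# Check the physical drives behind the controller in slot 0
hpacucli ctrl slot=0 pd all show status
```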