
Creating and Analyzing Java Heap Dumps

  • March 1, 2021

As Java developers, we are familiar with our applications throwing an OutOfMemoryError, or with server monitoring tools raising alerts about high JVM memory utilization.

To investigate memory problems, the JVM heap memory is often the first place to look.

To see this in action, we will first trigger an OutOfMemoryError and then capture a heap dump. We will then analyze this heap dump to identify the objects that could be causing the memory leak.


What Is a Heap Dump?

Whenever we create a Java object by creating an instance of a class, it is always placed in an area known as the heap. Classes of the Java runtime are also created in this heap.

The heap gets created when the JVM starts up. It expands or shrinks during runtime to accommodate the objects created or destroyed in our application.

When the heap becomes full, the garbage collection process runs to collect the objects that are no longer referenced (i.e. no longer used). More information on memory management can be found in the Oracle docs.

Heap dumps contain a snapshot of all the live objects that are being used by a running Java application on the Java heap. We can obtain detailed information for each object instance, such as the address, type, class name, or size, and whether the instance has references to other objects.

Heap dumps have two formats:

  • the classic format, and
  • the Portable Heap Dump (PHD) format.

PHD is the default format. The classic format is human-readable since it is in ASCII text, but the PHD format is binary and should be processed by appropriate tools for analysis.

Sample Program to Generate an OutOfMemoryError

To explain the analysis of a heap dump, we will use a simple Java program to generate an OutOfMemoryError:

We keep allocating memory in a loop until the JVM no longer has enough heap space to satisfy an allocation, at which point an OutOfMemoryError is thrown.
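
A minimal version of such a program might look like the sketch below. The class name and allocation size are illustrative, and the catch block is only there so the sketch terminates cleanly; the article's program simply lets the error propagate:

```java
import java.util.ArrayList;
import java.util.List;

public class OOMGenerator {

    // Strong references in a static list prevent the GC from reclaiming anything.
    static final List<byte[]> LEAK = new ArrayList<>();

    // Allocates 10 MB blocks until the JVM cannot satisfy the request.
    static int allocateUntilExhausted() {
        int count = 0;
        try {
            while (true) {
                LEAK.add(new byte[10 * 1024 * 1024]);
                count++;
            }
        } catch (OutOfMemoryError e) {
            LEAK.clear(); // release references so the JVM can recover
            return count;
        }
    }

    public static void main(String[] args) {
        System.out.println("OutOfMemoryError after "
            + allocateUntilExhausted() + " allocations");
    }
}
```

With -XX:+HeapDumpOnOutOfMemoryError set (covered below), the dump is written the first time the error is thrown, whether or not the code catches it.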

Finding the Root Cause of an OutOfMemoryError

We will now find the cause of this error by doing a heap dump analysis. This is done in two steps:

  • Capture the heap dump
  • Analyze the heap dump file to locate the suspected cause.

We can capture a heap dump in multiple ways. Let us capture the heap dump for our example first with jmap and then by passing a VM argument on the command line.

Generating a Heap Dump on Demand with jmap

jmap is packaged with the JDK and extracts a heap dump to a specified file location.

To generate a heap dump with jmap, we first find the process ID of our running Java program with the jps tool, which lists all the running Java processes on our machine:

Next, we run the jmap command to generate the heap dump file:
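
The jmap invocation combines the dump options with the process ID reported by jps (12587 below is a hypothetical PID; substitute the one from your own jps output):

```shell
jmap -dump:live,format=b,file=hdump.hprof 12587
```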

After running this command, a heap dump file with the extension .hprof is created.

The option live is used to collect only the live objects that still have a reference in the running code. With the live option, a full GC is triggered to sweep away unreachable objects and then dump only the live objects.

Automatically Generating a Heap Dump on OutOfMemoryError

This option is used to capture a heap dump at the point in time when an OutOfMemoryError occurred. This helps to diagnose the problem because we can see what objects were sitting in memory and what percentage of memory they were occupying right at the time of the OutOfMemoryError .

We will use this option for our example since it will give us more insight into the cause of the crash.

Let us run the program with the VM option -XX:+HeapDumpOnOutOfMemoryError from the command line or our favorite IDE to generate the heap dump file:
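
The invocation might look like this; the class name stands in for the example program, the heap size is deliberately small to trigger the error quickly, and -XX:HeapDumpPath is optional (it defaults to the working directory):

```shell
java -Xmx200m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=hdump.hprof OOMGenerator
```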

After running our Java program with these VM arguments, we get this output:

As we can see from the output, the heap dump file with the name hdump.hprof is created when the OutOfMemoryError occurs.

Other Methods of Generating Heap Dumps

Some of the other methods of generating a heap dump are:

jcmd: jcmd is used to send diagnostic command requests to the JVM. It is packaged as part of the JDK and can be found in the bin folder of a Java installation.

JVisualVM: Usually, analyzing a heap dump takes more memory than the heap dump file itself. This could be problematic if we are trying to analyze a heap dump from a large server on a development machine. JVisualVM provides live sampling of the heap memory, so it does not eat up the whole memory.
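
Two more options are worth knowing. With jcmd, `jcmd <pid> GC.heap_dump <file>` produces the same hprof file as jmap. A JVM can also dump itself programmatically through the HotSpotDiagnosticMXBean; this is a HotSpot-specific API, and the file name below is illustrative:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class ProgrammaticHeapDump {

    // Writes an hprof-format dump of the current JVM to the given path.
    static File dumpHeap(String path, boolean liveOnly) throws Exception {
        File out = new File(path);
        out.delete(); // dumpHeap refuses to overwrite an existing file
        HotSpotDiagnosticMXBean bean =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // liveOnly=true behaves like jmap's "live" option: GC first, dump reachable objects
        bean.dumpHeap(out.getAbsolutePath(), liveOnly);
        return out;
    }

    public static void main(String[] args) throws Exception {
        File dump = dumpHeap("hdump.hprof", true);
        System.out.println("Wrote " + dump.length() + " bytes to " + dump.getName());
    }
}
```

This is handy for exposing a dump endpoint in long-running services, where attaching jmap may not be convenient.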

Analyzing the Heap Dump

What we are looking for in a heap dump is:

  • Objects with high memory usage
  • The object graph, to identify objects that are not releasing memory
  • Reachable and unreachable objects

Eclipse Memory Analyzer (MAT) is one of the best tools to analyze Java heap dumps. Let us understand the basic concepts of Java heap dump analysis with MAT by analyzing the heap dump file we generated earlier.

We will first start the Memory Analyzer Tool and open the heap dump file. In Eclipse MAT, two types of object sizes are reported:

  • Shallow heap size: The shallow heap of an object is its own size in memory.
  • Retained heap size: The retained heap is the amount of memory that will be freed when the object is garbage collected.

Overview Section in MAT

After opening the heap dump, we will see an overview of the application’s memory usage. The pie chart shows the biggest objects by retained size in the overview tab as shown here:

PieChart

For our application, this information in the overview means that if we could dispose of a particular instance of java.lang.Thread, we would save 1.7 GB, almost all of the memory used in this application.

Histogram View

While that might look promising, java.lang.Thread is unlikely to be the real problem here. To get a better insight into what objects currently exist, we will use the Histogram view:

histogram

We have filtered the histogram with the regular expression “io.pratik.*” to show only the classes that match the pattern. With this view, we can see the number of live objects: for example, 243 BrandedProduct objects and 309 Price objects are alive in the system. We can also see the amount of memory each object is using.

There are two calculations: shallow heap and retained heap. The shallow heap is the amount of memory consumed by one object itself. An object requires 32 or 64 bits for each reference, depending on the architecture, while primitives such as int and long require 4 and 8 bytes respectively. While this can be interesting, the more useful metric is the retained heap.

Retained Heap Size

The retained heap size is computed by adding the size of all the objects in the retained set. A retained set of X is the set of objects which would be removed by the Garbage Collector when X is collected.
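
The definition can be made concrete with a toy object graph: the retained set of X is everything reachable from the roots now, minus everything that would still be reachable if X were ignored. The node names below are hypothetical, loosely echoing the example application:

```java
import java.util.*;

public class RetainedSetDemo {

    // A tiny object graph: each key references the objects in its value list.
    static final Map<String, List<String>> GRAPH = Map.of(
        "root", List.of("thread"),
        "thread", List.of("productGroup"),
        "productGroup", List.of("products"),
        "products", List.of());

    // Everything reachable from `from`, treating `excluded` as collected.
    static Set<String> reachable(String from, String excluded) {
        Set<String> seen = new HashSet<>();
        Deque<String> todo = new ArrayDeque<>();
        todo.push(from);
        while (!todo.isEmpty()) {
            String n = todo.pop();
            if (n.equals(excluded) || !seen.add(n)) continue;
            todo.addAll(GRAPH.get(n));
        }
        return seen;
    }

    // Retained set of x = reachable now, minus reachable without x.
    static Set<String> retainedSet(String x) {
        Set<String> retained = new TreeSet<>(reachable("root", null));
        retained.removeAll(reachable("root", x));
        return retained;
    }

    public static void main(String[] args) {
        System.out.println(retainedSet("productGroup")); // [productGroup, products]
    }
}
```

This mirrors why a 32-byte ProductGroup can retain gigabytes: its retained set includes everything reachable only through it.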

The retained heap can be calculated in two different ways, using the quick approximation or the precise retained size:

retainedheap

By calculating the Retained Heap we can now see that io.pratik.ProductGroup is holding the majority of the memory, even though it is only 32 bytes (shallow heap size) by itself. By finding a way to free up this object, we can certainly get our memory problem under control.

Dominator Tree

The dominator tree is derived from the complex object graph generated at runtime and helps identify the largest chains of retained memory. An object X is said to dominate an object Y if every path from the GC roots to Y passes through X.

Looking at the dominator tree for our example, we can see which objects are retained in the memory.

dominatortree

We can see that the ProductGroup object holds the memory instead of the Thread object. We can probably fix the memory problem by releasing objects contained in this object.

Leak Suspects Report

We can also generate a “Leak Suspects Report” to find a suspected big object or set of objects. This report presents the findings on an HTML page and is also saved in a zip file next to the heap dump file.

Due to its smaller size, it is preferable to share the “Leak Suspects Report” with teams specialized in performing analysis tasks instead of the raw heap dump file.

The report has a pie chart, which gives the size of the suspected objects:

leakssuspectPieChart

For our example, we have one suspect labeled as “Problem Suspect 1” which is further described with a short description:

leakssuspects

Apart from the summary, this report also contains detailed information about the suspects which is accessed by following the “details” link at the bottom of the report:

leakssuspectdetails

The detailed information comprises:

Shortest paths from GC root to the accumulation point : Here we can see all the classes and fields through which the reference chain is going, which gives a good understanding of how the objects are held. In this report, we can see the reference chain going from the Thread to the ProductGroup object.

Accumulated Objects in Dominator Tree : This gives some information about the content which is accumulated which is a collection of GroceryProduct objects here.

In this post, we introduced the heap dump, which is a snapshot of a Java application’s object memory graph at runtime. To illustrate, we captured the heap dump from a program that threw an OutOfMemoryError at runtime.

We then looked at some of the basic concepts of heap dump analysis with Eclipse Memory Analyzer: large objects, GC roots, shallow vs. retained heap, and dominator tree, all of which together will help us to identify the root cause of specific memory issues.


Andy Balaam's Blog


How to analyse a .phd heap dump from an IBM JVM


If you have been handed a .phd file which is a dump of the heap of an IBM Java virtual machine, you can analyse it using the Eclipse Memory Analyzer Tool (MAT), but you must install the IBM Monitoring and Diagnostic Tools first.

Download MAT from eclipse.org/mat/downloads.php . I suggest the Standalone version.

Unzip it and run the MemoryAnalyzer executable inside. Add an argument to control how much memory it gets, e.g. to give it 4GB:
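
For example, to start MAT with a 4 GB maximum heap (the Eclipse launcher passes everything after -vmargs to the JVM):

```shell
./MemoryAnalyzer -vmargs -Xmx4g
```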

Once it’s started, go to Help -> Install new software.

Next to “Work with” paste in the URL for the IBM Developer Toolkit update site: http://public.dhe.ibm.com/ibmdl/export/pub/software/websphere/runtimes/tools/dtfj/

Click Add…

Type in a name like “IBM Monitoring and Diagnostic Tools” and click OK.

In the list below, an item should appear called IBM Monitoring and Diagnostic Tools. Tick the box next to it, click Next, and follow the wizard to accept the license agreements and install the toolkit.

Restart Eclipse when prompted.

Choose File -> Open Heap Dump and choose your .phd file. It should open in MAT and allow you to figure out who is using all that memory.


Differences Between Heap Dump, Thread Dump and Core Dump

Last updated: January 8, 2024



1. Overview

A dump is data captured from a storage medium and saved somewhere for further analysis. The Java Virtual Machine (JVM) helps to manage memory in Java, and in the case of errors, we can get a dump file from the JVM to diagnose them.

In this tutorial, we’ll explore three common Java dump files – heap dump, thread dump, and core dump – and understand their use cases.

2. Heap Dump

During runtime, the JVM creates the heap, which contains references to objects in use in a running Java application. The heap dump contains a saved copy of the current state of all objects in use at runtime .

Additionally, it’s used to analyze the OutOfMemoryError errors in Java .

Furthermore, the heap dump can be in two formats – the classic format and the Portable Heap Dump (PHD) format.

The classic format is human-readable, while the PHD is in binary and needs tools for further analysis. Also, PHD is the default for a heap dump.

Moreover, modern heap dumps also contain some thread information. Starting from Java 6 update 14, a heap dump also contains stack traces for threads. The stack traces in the heap dump connect objects to the threads using them .

Analysis tools like Eclipse Memory Analyzer include support to retrieve this information.

2.1. Use Case

Heap dumps can help when analyzing OutOfMemoryError in a Java application .

Let’s see some example code that throws OutOfMemoryError :

In the example code above, we create a scenario of an infinite loop until the heap memory is full. As we know, the new keyword helps to allocate memory on the heap in Java.

To capture the heap dump of the code above, we’ll need a tool. One of the most used tools is jmap .

First, we need to get the process ID of all running Java processes on our machine by running the jps command:

The command above outputs to the console all running Java processes:

Here, our process of interest is HeapDump. Therefore, let’s run the jmap command with the HeapDump process ID to capture the heap dump:

The command above generates the hdump.hprof file in the project root directory.

Finally, we can use tools like Eclipse Memory Analyzer to analyze the dump file .

3. Thread Dump

The thread dump contains the snapshot of all threads in a running Java program at a specific instant .

A thread is the smallest part of a process that helps a program to operate efficiently by running multiple tasks concurrently.

Furthermore, a thread dump can help diagnose efficiency issues in a Java application . Thus, it’s a vital tool for analyzing performance issues, especially when an application is slow.

Additionally, it can help detect threads stuck in an infinite loop. It can also help identify deadlocks, where multiple threads are waiting for each other to release resources.

Additionally, it can identify a situation where certain threads aren’t getting enough CPU time. This can help identify performance bottlenecks.

3.1. Use Case

Here’s an example program that can potentially have a slow performance due to a long-running task:

In the sample code above, we create a method that loops up to Integer.MAX_VALUE and outputs each value to the console. This is a long-running operation and can become a performance issue.
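
A sketch of such a program follows; the bound is a parameter here so the example can also terminate quickly, while the article's scenario passes Integer.MAX_VALUE:

```java
public class ThreadDump {

    // Prints every value up to `limit` -- a deliberately slow, CPU-bound task.
    static int longRunningTask(int limit) {
        int i;
        for (i = 0; i < limit; i++) {
            System.out.println(i);
        }
        return i;
    }

    public static void main(String[] args) {
        // Pass Integer.MAX_VALUE to reproduce the slow scenario;
        // a small default keeps this sketch from running for hours.
        int limit = args.length > 0 ? Integer.parseInt(args[0]) : 100;
        longRunningTask(limit);
    }
}
```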

To analyze the performance, we can capture the thread dump . First, let’s find the process ID of all running Java programs:

The jps  command outputs all Java processes to the console:

We are interested in the ThreadDump process ID. Next, let's use the jstack command with the process ID to take the thread dump:

The command above captures the thread dump and saves it in a txt file for further analysis .
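
The same information is also available in-process through ThreadMXBean, which can be useful for building a diagnostics endpoint without shelling out to jstack. This is a sketch, not the article's code:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

public class ProgrammaticThreadDump {

    // Returns a textual dump of every live thread, similar to jstack output.
    static String dumpThreads() {
        StringBuilder sb = new StringBuilder();
        // true, true: include locked monitors and locked synchronizers
        for (ThreadInfo info :
                ManagementFactory.getThreadMXBean().dumpAllThreads(true, true)) {
            sb.append(info.toString());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(dumpThreads());
    }
}
```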

4. Core Dump

The core dump, also known as the crash dump, contains the snapshot of a program when the program crashed or abruptly terminated .

The JVM runs bytecode and not native code. Hence, Java code cannot cause core dumps.

However, some Java programs use the Java Native Interface (JNI) to run native code directly. JNI code can crash the JVM because external native libraries can fail. We can take the core dump at that instant and analyze it.

Furthermore, a core dump is an OS-level dump and can be used to find the details of native calls when a JVM crashes .

4.1. Use Case

Let’s see an example that generates a core dump using JNI.

First, let’s create a class named CoreDump  to load a native library:

Next, let’s compile the Java code using the javac command:

Then, let’s generate a header for native method implementation by running the javac -h command:

Finally, let’s implement a native method in C that will crash the JVM:

Let’s compile the native code by running the gcc command:

This generates a shared library named libnativelib.so. Next, let's run the Java code with the shared library on the library path:

The native method crashed the JVM and generated a core dump in the project directory:

The above output shows the crash information and the location of the dump file.

5. Key Differences

Here’s a summary table showing the key differences between three types of Java dump files:

Dump Type Use Case Contains
Heap Dump Diagnose memory issues like OutOfMemoryError Snapshot of objects in the Java heap
Thread Dump Troubleshoot performance issues, thread deadlocks, and infinite loops Snapshot of all thread states in the JVM
Core Dump Debug crashes caused by native libraries Process state when the JVM crashes

6. Conclusion

In this article, we learned the differences between heap dump, thread dump, and core dump by looking at their uses. Additionally, we saw example code with different issues and generated a dump file for further analysis. Each dump file serves a different purpose for troubleshooting Java applications.

As always, the source code for the examples is available over on GitHub .


Heap dump

Heap dumps contain a snapshot of all the live objects that are being used by a running Java™ application on the Java heap. You can obtain detailed information for each object instance, such as the address, type, class name, or size, and whether the instance has references to other objects.

There are two formats for heap dumps: the classic format and the Portable Heap Dump (PHD) format, which is the default. While the classic format is generated in ASCII text and can be read directly, the PHD format is binary and must be processed for analysis.

Obtaining dumps

Heap dumps are generated by default in PHD format when the Java heap runs out of space. If you want to trigger the production of a heap dump in response to other situations, or in classic format, you can use one of the following options:

  • Configure the heap dump agent. For more information, see the -Xdump option.
  • Use the com.ibm.jvm.Dump API programmatically in your application code. For more information, see the JVM diagnostic utilities API documentation .

Analyzing dumps

The best method to analyze a PHD heap dump is to use the Eclipse Memory Analyzer™ tool (MAT) or the IBM Memory Analyzer tool . These tools process the dump file and provide a visual representation of the objects in the Java Heap. Both tools require the Diagnostic Tool Framework for Java (DTFJ) plug-in. To install the DTFJ plug-in in the Eclipse IDE, select the following menu items:

The following sections contain detailed information about the content of each type of heap dump file.

Portable Heap Dump (PHD) format

A PHD format dump file contains a header section and a body section. The body section can contain information about object, array, or class records. Primitive numbers are used to describe the file format, as detailed in the following table:

Primitive number Length in bytes
byte 1
short 2
int 4
long 8
word 4 (32-bit platforms) or 8 (64-bit platforms)

General structure

The following structure comprises the header section of a PHD file:

  • A UTF string indicating that the file is a portable heap dump
  • An int containing the PHD version number
  • An int containing flags:
      • 1 indicates that the word length is 64-bit.
      • 2 indicates that all the objects in the dump are hashed. This flag is set for heap dumps that use 16-bit hash codes. Eclipse OpenJ9™ heap dumps use 32-bit hash codes that are created only when used. For example, these hash codes are created when the APIs Object.hashCode() or Object.toString() are called in a Java application. If this flag is not set, the presence of a hash code is indicated by the hash code flag on the individual PHD records.
      • 4 indicates that the dump is from an OpenJ9 VM.
  • A byte containing a tag with a value of 1 that indicates the start of the header.
  • A number of header records, each preceded by a byte tag:
      • header tag 1 - not used
      • header tag 2 - indicates the end of the header
      • header tag 3 - not used
      • header tag 4 - indicates the VM version (variable-length UTF string)
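
The header layout can be illustrated by writing and re-reading a synthetic header with DataOutputStream/DataInputStream. The field values below are illustrative, not taken from a real dump file:

```java
import java.io.*;

public class PhdHeaderDemo {

    // Builds a minimal synthetic header: eyecatcher, version, flags, start tag.
    static byte[] sampleHeader() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeUTF("portable heap dump"); // UTF eyecatcher
        out.writeInt(6);                    // PHD version number (illustrative)
        out.writeInt(1 | 4);                // flags: 64-bit word length + OpenJ9 VM
        out.writeByte(1);                   // tag 1: start of header
        return buf.toByteArray();
    }

    // Reads the fields back and decodes the flag bits.
    static String describe(byte[] header) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(header));
        String eyecatcher = in.readUTF();
        int version = in.readInt();
        int flags = in.readInt();
        return eyecatcher + ", version " + version
            + ", 64-bit=" + ((flags & 1) != 0)
            + ", openj9=" + ((flags & 4) != 0);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(describe(sampleHeader()));
    }
}
```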

The body of a PHD file is indicated by a byte that contains a tag with a value of 2, after which there are a number of dump records. Dump records are preceded by a 1 byte tag with the following record types:

  • Short object: 0x80 bit of the tag is set
  • Medium object: 0x40 bit of the tag is set (top bit value is 0)
  • Primitive array: 0x20 bit of the tag is set (all other tag values have the top 3 bits with a value of 0)
  • Long record: tag value is 4
  • Class record: tag value is 6
  • Long primitive array: tag value is 7
  • Object array: tag value is 8

These records are described in more detail in the sections that follow.

The end of the PHD body is indicated by a byte that contains a tag with a value of 3.

Object records

Object records can be short, medium, or long, depending on the number of object references in the heap dump.

1. Short object record

The following information is contained within the tag byte:

The 1 byte tag, which consists of the following bits:

Bit number Value or description
1 Bit is set (0x80)
2 and 3 Indicates the class cache index. The value represents an index into a cache of the last 4 classes used.
4 and 5 Contain the number of references. Most objects contain 0 - 3 references. If there are 4 - 7 references, the medium object record is used. If there are more than 7 references, the long object record is used.
6 Indicates whether the gap is a 1 byte value or a short. The gap is the difference between the address of this object and the previous object. If set, the gap is a short. If the gap does not fit into a short, the long object record format is used.
7 and 8 Indicates the size of each reference (0=byte, 1=short, 2=int, 3=long)

A byte or a short containing the gap between the address of this object and the address of the preceding object. The value is signed and represents the number of 32-bit words between the two addresses. Most gaps fit into 1 byte.

  • If all objects are hashed, a short containing the hash code.
  • The array of references, if references exist. The tag shows the number of elements, and the size of each element. The value in each element is the gap between the address of the references and the address of the current object. The value is a signed number of 32-bit words. Null references are not included.

2. Medium object record

These records provide the actual address of the class rather than a cache index. The following format is used:

The 1 byte tag, consisting of the following bits:

Bit number Value or description
1 0
2 Set (0x40)
3, 4, and 5 Contain the number of references
6 Indicates whether the gap is a byte value or a short (see the Short object record description)
7 and 8 Indicate the size of each reference (0=byte, 1=short, 2=int, 3=long)

A byte or a short containing the gap between the address of this object and the address of the preceding object (See the Short object record description)

  • A word containing the address of the class of this object.
  • The array of references (See the Short object record description).

3. Long object record

This record format is used when there are more than 7 references, or if there are extra flags or a hash code. The following format is used:

The 1 byte tag, containing the value 4.

A byte containing flags, consisting of the following bits:

Bit number Value or description
1 and 2 Indicate whether the gap is a byte, short, int, or long format
3 and 4 Indicate the size of each reference (0=byte, 1=short, 2=int, 3=long)
5 and 6 Unused
7 Indicates if the object was hashed and moved. If this bit is set, the record includes the hash code
8 Indicates if the object was hashed

A byte, short, int, or long containing the gap between the address of this object and the address of the preceding object (See the Short object record description).

  • If all objects are hashed, a short containing the hash code. Otherwise, an optional int containing the hash code if the hashed and moved bit is set in the record flag byte.
  • An int containing the length of the array of references.

Array records

PHD arrays can be primitive arrays or object arrays, as described in the sections that follow.

1. Primitive array record

The following information is contained in an array record:

Bit number Value or description
1 and 2 0
3 Set (0x20)
4, 5, and 6 Contain the array type (0=bool, 1=char, 2=float, 3=double, 4=byte, 5=short, 6=int, 7=long)
7 and 8 Indicate the length of the array size and the length of the gap (0=byte, 1=short, 2=int, 3=long)

A byte, short, int, or long containing the gap between the address of this object and the address of the preceding object (See the Short object record description).

  • A byte, short, int, or long containing the array length.
  • An unsigned int containing the size of the instance of the array on the heap, including header and padding. The size is measured in 32-bit words, which you can multiply by four to obtain the size in bytes. This format allows encoding of lengths up to 16 GB in an unsigned int.
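The 32-bit-word arithmetic can be checked directly (a small illustration, not part of any PHD tooling):

```java
public class PhdArraySize {
    // Instance sizes are stored as unsigned 32-bit word counts;
    // multiply by four to get the size in bytes.
    static long sizeInBytes(long unsignedWordCount) {
        return unsignedWordCount * 4L;
    }

    public static void main(String[] args) {
        // The largest unsigned int word count encodes just under 16 GB
        long maxWords = 0xFFFFFFFFL; // 2^32 - 1 words
        System.out.println(sizeInBytes(maxWords)); // 17179869180 bytes, ~16 GB
    }
}
```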

2. Long primitive array record

This type of record is used when a primitive array has been hashed.

The 1 byte tag with a value of 7.

A byte containing the following flags:

Bit number Value or description
1, 2, and 3 Contain the array type (0=bool, 1=char, 2=float, 3=double, 4=byte, 5=short, 6=int, 7=long)
4 Indicates the length of the array size and the length of the gap (0=byte, 1=word).
5 and 6 Unused
7 Indicates if the object was hashed and moved. If this bit is set, the record includes the hash code.
8 Indicates if the object was hashed

A byte or word containing the gap between the address of this object and the address of the preceding object (See the Short object record description).

  • A byte or word containing the array length.

3. Object array record

The following format applies:

The 1 byte tag with a value of 8.

A byte containing flags, consisting of the following bits:

Bit number Value or description
1 and 2 Indicate whether the gap is a byte, short, int, or long.
3 and 4 Indicate the size of each reference (0=byte, 1=short, 2=int, 3=long)
5 and 6 Unused
7 Indicates if the object was hashed and moved. If this bit is set, the record includes the hash code.
8 Indicates if the object was hashed

A byte, short, int, or long containing the gap between the address of this object and the address of the preceding object (See the Short object record format description).

  • A word containing the address of the class of the objects in the array. Object array records do not update the class cache.
  • If all objects are hashed, a short containing the hash code. If the hashed and moved bit is set in the record's flag byte, this field contains an int.
  • A final int value appears at the end. This int contains the true array length, as a number of array elements. The true array length might differ from the length of the array of references because null references are excluded.

Class records

The PHD class record encodes a class object and contains the following format:

The 1 byte tag, containing the value 6.

A byte containing flags, consisting of the following bits:

Bit number Value or description
1 and 2 Indicate whether the gap is a byte, short, int, or long
3 and 4 Indicate the size of each static reference (0=byte, 1=short, 2=int, 3=long)
5 Indicates if the object was hashed

A byte, short, int, or long containing the gap between the address of this class and the address of the preceding object (See the Short object record description).

  • An int containing the instance size.
  • A word containing the address of the superclass.
  • A UTF string containing the name of this class.
  • An int containing the number of static references.
  • The array of static references (See the Short object record description).

Classic Heap Dump format

Classic heap dumps are produced in ASCII text on all platforms except z/OS, where they are encoded in EBCDIC. The dump is divided into the following sections:

Header record

A single string containing information about the runtime environment, platform, and build levels, similar to the following example:

Object records

A record of each object instance in the heap with the following format:

The following object types ( <object type> ) might be shown:

  • class name (including package name)
  • class array type
  • primitive array type

These types are abbreviated in the record. To determine the type, see the Java VM Type Signature table .

Any references found are also listed, excluding references to an object's class or NULL references.

The following example shows an object instance (16 bytes in length) of type java/lang/String , with a reference to a char array:

The object instance (length 32 bytes) of type char array, as referenced from the java/lang/String , is shown in the following example:

The following example shows an object instance (24 bytes in length) of type array of java/lang/String :

Class records

A record of each class in the following format:

The following class types ( <class type> ) might be shown:

  • primitive array types

Any references found in the class block are also listed, excluding NULL references.

The following example shows a class object (80 bytes in length) for java/util/Date , with heap references:

Trailer record 1

A single record containing record counts, in decimal.

For example:

Trailer record 2

A single record containing totals, in decimal.

The values in the example reflect the following counts:

  • 7147 total objects
  • 22040 total references
  • (12379) total NULL references as a proportion of the total references count

Java VM Type Signatures

The following table shows the abbreviations used for different Java types in the heap dump records:

Java VM Type Signature Java Type
Z boolean
B byte
C char
S short
I int
J long
F float
D double
L<classname>; class
[ array of type

.PHD File Extension

  • 1. PhotoDirector Project File
  • 2. Portable Heap Dump File

PhotoDirector Project File

Developer CyberLink
What is a PHD file?

A photo project created by PhotoDirector, a program used for editing digital photos. PhotoDirector supports photos imported from many different camera RAW formats, including .DNG , .CR2 , and .SRF . The project file contains the library of imported images and stores any user edits. It is the native project save format and is saved along with other data files and folders that contain the digital photo data.

Portable Heap Dump File

Developer IBM

A data file created in the Portable Heap Dump format, which is used to create Java heap dump files using IBM's version of the Java Virtual Machine (JVM). It may contain a record of all Java heap objects and is used for debugging application errors such as memory leaks.

More Information

Portable heap dumps can be generated by setting the environment variables IBM_HEAP_DUMP=true and IBM_HEAPDUMP=true . PHD files may be saved in a text or a binary format; the binary format is much smaller in file size. To specify the text format, set the IBM_JAVA_HEAPDUMP_TEXT=true environment variable.

NOTE: Portable heap dumps are typically generated in response to an event, such as an OutOfMemoryError or a signal sent to a running Java application. To create a PHD file, start your Java program with the required environment variables set and then trigger such an event.


Acquiring Heap Dumps

HPROF Binary Heap Dumps

Get Heap Dump on an OutOfMemoryError

One can get an HPROF binary heap dump on an OutOfMemoryError from Sun JVMs (1.4.2_12 or higher and 1.5.0_07 or higher), Oracle JVMs, OpenJDK JVMs, HP-UX JVMs (1.4.2_11 or higher), and SAP JVMs (since 1.5.0) by setting the following JVM parameter:

-XX:+HeapDumpOnOutOfMemoryError

By default, the heap dump is written to the working directory. This can be controlled with the following option, which specifies a target directory or file name:

-XX:HeapDumpPath=/dumpPath/
-XX:HeapDumpPath=./java_pidPIDNNN.hprof

Interactively Trigger a Heap Dump

To get a heap dump on demand, one can add the following parameter to the JVM and press CTRL + BREAK at the preferred moment:

-XX:+HeapDumpOnCtrlBreak

This is only available between Java 1.4.2 and Java 6.

HPROF agent

To use the HPROF agent to generate a dump at the end of execution, or on a SIGQUIT signal, use the following JVM parameter:

-agentlib:hprof=heap=dump,format=b

The HPROF agent was removed in Java 9.

Alternatively, other tools can be used to acquire a heap dump:

  • jmap -dump:format=b,file=<filename.hprof> <pid>
  • jcmd <pid> GC.heap_dump <filename.hprof>
  • JConsole (see sample usage in Basic Tutorial )
  • JVisualVM was bundled with Java 7 and Java 8, but is now available from a separate download site .
  • Memory Analyzer (see bottom of page )
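On HotSpot-based JVMs, the same HPROF dump that jmap and jcmd produce can also be requested programmatically through the com.sun.management.HotSpotDiagnosticMXBean API. A minimal sketch (the class name and file path below are illustrative; dumpHeap refuses to overwrite an existing file):

```java
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Path;

import com.sun.management.HotSpotDiagnosticMXBean;

public class ProgrammaticDump {
    public static void main(String[] args) throws Exception {
        // Use a fresh path: dumpHeap throws if the file already exists
        Path out = Files.createTempDirectory("dumps").resolve("heap.hprof");
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // live=true dumps only reachable objects (triggers a GC first)
        bean.dumpHeap(out.toString(), true);
        System.out.println("Wrote " + Files.size(out) + " bytes to " + out);
    }
}
```

The resulting .hprof file can be opened in Memory Analyzer or VisualVM like any jmap-produced dump.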

System Dumps and Heap Dumps from IBM Virtual Machines

  • All known formats
  • HPROF binary heap dumps
  • IBM 1.4.2 SDFF 1
  • IBM Javadumps
  • IBM SDK for Java (J9) system dumps
  • IBM SDK for Java Portable Heap Dumps
Dump Format Approximate size on disk Objects, classes, and classloaders Thread details Field names Field and array references Primitive field contents Primitive array contents Accurate garbage-collection roots Native memory and threads Compression
HPROF Java heap size Y Y Y Y Y Y Y N with Gzip to around 40 percent of original size
IBM system dumps Java heap size + 30 percent Y Y Y Y Y Y Y Y with Zip or jextract to around 20 percent of original size
IBM PHD 20 percent of Java heap size Y with Javacore N Y N N N N with Gzip to around 20 percent of original size

Older versions of IBM Java (e.g. < 5.0SR12, < 6.0SR9) require running jextract on the operating system core dump, which produces a zip file containing the core dump, XML or SDFF file, and shared libraries. The IBM DTFJ feature still supports reading these jextracted zips, although IBM DTFJ feature version 1.12.29003.201808011034 and later cannot read IBM Java 1.4.2 SDFF files, so MAT cannot read them either.

Dumps from newer versions of IBM Java do not require jextract for use in MAT, since DTFJ is able to directly read each supported operating system's core dump format. Simply ensure that the operating system core dump file ends with the .dmp suffix for visibility in the MAT Open Heap Dump selection. It is also common to zip core dumps because they are so large and compress very well. If a core dump is compressed with .zip , the IBM DTFJ feature in MAT is able to decompress the ZIP file and read the core from inside (just like a jextracted zip).

The only significant downsides to system dumps compared to PHDs are that they are much larger, they usually take longer to produce, they may be useless if they are manually taken in the middle of an exclusive event that manipulates the underlying Java heap (such as a garbage collection), and they sometimes require operating system configuration ( Linux , AIX ) to ensure non-truncation.

In recent versions of IBM Java (> 6.0.1), by default, when an OutOfMemoryError is thrown, IBM Java produces a system dump, PHD, javacore, and Snap file on the first occurrence for that process (although often the core dump is suppressed by the default 0 core ulimit on operating systems such as Linux). For the next three occurrences, it produces only a PHD, javacore, and Snap. If you only plan to use system dumps, and you've configured your operating system correctly as per the links above (particularly core and file ulimits), then you may disable PHD generation with -Xdump:heap:none . For versions of IBM Java older than 6.0.1, you may switch from PHDs to system dumps using -Xdump:system:events=systhrow,filter=java/lang/OutOfMemoryError,request=exclusive+prepwalk -Xdump:heap:none

In addition to an OutOfMemoryError, system dumps may be produced using operating system tools (e.g. gcore in gdb for Linux, gencore for AIX, Task Manager for Windows, SVCDUMP for z/OS, etc.), using the IBM and OpenJ9 Java APIs , using the various options of -Xdump , using Java Surgery , and more.

Versions of IBM Java older than IBM JDK 1.4.2 SR12, 5.0 SR8a and 6.0 SR2 are known to produce inaccurate GC root information.

Acquire Heap Dump from Memory Analyzer

If the Java process from which the heap dump is to be acquired is on the same machine as the Memory Analyzer, it is possible to acquire a heap dump directly from the Memory Analyzer. Dumps acquired this way are directly parsed and opened in the tool.

Acquiring the heap dump is VM-specific. Memory Analyzer comes with several so-called heap dump providers - for OpenJDK, Oracle, and Sun based VMs (needs an OpenJDK, Oracle, or Sun JDK with jmap) and for IBM VMs (needs an IBM JDK or JRE). Extension points are also provided for adopters to plug in their own heap dump providers.

To trigger a heap dump from Memory Analyzer, open the File > Acquire Heap Dump... menu item.

Depending on the concrete execution environment, the pre-installed heap dump providers may work with their default settings, in which case a list of running Java processes should appear. To make selection easier, the order of the Java processes can be altered by clicking on the column titles for pid or Heap Dump Provider .

Select a process to be dumped

One can now select the process from which a heap dump should be acquired, provide a preferred location for the heap dump, and press Finish to acquire the dump. Some of the heap dump providers may allow (or require) additional parameters (e.g. the type of the heap dump) to be set. This can be done by using the Next button to get to the Heap Dump Provider Arguments page of the wizard.

Configuring the Heap Dump Providers

If the process list is empty, try configuring the available heap dump providers. To do this, press Configure... , select a matching provider from the list, and click on it. You can then see the required settings and specify them. Next will then apply any changed settings and refresh the JVM list if any settings have been changed. Prev will return to the current JVM list without applying any changed settings. To apply the changed settings, re-enter and exit the Configure Heap Dump Providers... page as follows: Configure... > Next

If a process is selected before pressing Configure... then the corresponding dump provider will be selected on entering the Configure Heap Dump Providers... page.

If a path to a jcmd executable is provided then this command will be used to generate a list of running JVMs and to generate the dumps.

System dumps can be processed using jextract, which compresses the dump and adds extra system information so that the dump can be moved to another machine.

Portable Heap Dump (PHD) files generated with the Heap option can be compressed using the gzip compressor to reduce the file size.

HPROF files can be compressed using the gzip compressor to reduce the file size. A compressed file may take longer to parse in Memory Analyzer, and running queries and reports or reading fields from objects may also take longer.

Multiple snapshots in one heap dump

Memory Analyzer 1.2 and earlier handled this situation by choosing the first heap dump snapshot found unless another was selected via an environment variable or MAT DTFJ configuration option.

Memory Analyzer 1.3 handles this situation by detecting the multiple dumps, then presenting a dialog for the user to select the required snapshot.

Choose a snapshot to be analyzed

The index files generated have a component in the file name from the snapshot identifier, so the index files from each snapshot can be distinguished. This means that multiple snapshots from one heap dump file can be examined in Memory Analyzer simultaneously. The heap dump history for the file remembers the last snapshot selected for that file, though when the snapshot is reopened via the history the index file is also shown in the history. To open another snapshot in the dump, close the first snapshot, then reopen the heap dump file using the File menu and another snapshot can be chosen to be parsed. The first snapshot can then be reopened using the index file in the history, and both snapshots can be viewed at once.

The following table shows the availability of VM options and tools on the various platforms:

Vendor Release | On out of memory | On Ctrl+Break | Agent | JMap | JCmd | JConsole | JVMMon | Attach API | MAT acquire heap dump
Sun, HP 1.4.2_12 Yes Yes Yes No No No No   No
1.5.0_07 Yes Yes (Since 1.5.0_15) Yes Yes (Only Solaris and Linux) No No No com.sun.tools.attach Yes (Only Solaris and Linux)
1.6.0_00 Yes No Yes Yes No Yes No com.sun.tools.attach Yes
Oracle, OpenJDK, HP 1.7.0 Yes No Yes Yes Yes Yes   com.sun.tools.attach Yes
Oracle, OpenJDK, Eclipse Temurin, HP, Amazon Corretto 1.8.0 Yes No Yes Yes Yes Yes   com.sun.tools.attach Yes
11 Yes No No Yes Yes Yes   com.sun.tools.attach Yes
17 Yes No No Yes Yes Yes   com.sun.tools.attach Yes
21 Yes No No Yes Yes Yes   com.sun.tools.attach Yes
SAP Any 1.5.0 Yes Yes Yes Yes (Only Solaris and Linux) No No Yes    
IBM 1.4.2 SR12 Yes Yes No No No No No   No
1.5.0 SR8a Yes Yes No No No No No com.ibm.tools.attach No
1.6.0 SR2 Yes Yes No No No No No com.ibm.tools.attach No
1.6.0 SR6 Yes Yes No No No No No com.ibm.tools.attach Yes
1.7.0 Yes Yes No No No No No com.ibm.tools.attach Yes
1.8.0 Yes Yes No No No No No com.ibm.tools.attach Yes
1.8.0 SR5 Yes Yes No No No Yes (PHD only?) No com.sun.tools.attach Yes
OpenJ9, IBM Semeru 1.8.0 Yes Yes No No Yes Yes (PHD only) No com.sun.tools.attach Yes
11 Yes Yes No No Yes Yes No com.sun.tools.attach Yes
17 Yes Yes No No Yes Yes No com.sun.tools.attach Yes

Create a Heap Dump from a Native Executable

You can create a heap dump of a running executable to monitor its execution. Just like any other Java heap dump, it can be opened with the VisualVM tool.

To enable heap dump support, a native executable must be built with the --enable-monitoring=heapdump option. A heap dump can then be created in the following ways:

  • Create a heap dump with VisualVM.
  • The command-line option -XX:+HeapDumpOnOutOfMemoryError can be used to create a heap dump when the native executable runs out of Java heap memory.
  • Dump the initial heap of a native executable using the -XX:+DumpHeapAndExit command-line option.
  • Create a heap dump by sending a SIGUSR1 signal to the application at runtime.
  • Create a heap dump programmatically using the org.graalvm.nativeimage.VMRuntime#dumpHeap API.

All approaches are described below.

Note: By default, a heap dump is created in the current working directory. The -XX:HeapDumpPath option can be used to specify an alternative filename or directory. For example: ./helloworld -XX:HeapDumpPath=$HOME/helloworld.hprof
Also note: It is not possible to create a heap dump on the Microsoft Windows platform.

Create a Heap Dump with VisualVM

A convenient way to create a heap dump is to use VisualVM . For this, you need to add jvmstat to the --enable-monitoring option (for example, --enable-monitoring=heapdump,jvmstat ). This will allow VisualVM to pick up and list running Native Image processes. You can then request a heap dump in the same way you can request one when your application runs on the JVM (for example, right-click on the process, then select Heap Dump ).

Create a Heap Dump on OutOfMemoryError

Start the application with the option -XX:+HeapDumpOnOutOfMemoryError to get a heap dump when the native executable throws an OutOfMemoryError because it ran out of Java heap memory. The heap dump is created in a file named svm-heapdump-<PID>-OOME.hprof . For example:

Dump the Initial Heap of a Native Executable

Use the -XX:+DumpHeapAndExit command-line option to dump the initial heap of a native executable. This can be useful to identify which objects the Native Image build process allocated to the executable’s heap. For a HelloWorld example, use the option as follows:

Create a Heap Dump with SIGUSR1 (Linux/macOS only)

Note: This requires the Signal API, which is enabled by default except when building shared libraries.

The following example is a simple multithreaded Java application that runs for 60 seconds. This gives you enough time to send it a SIGUSR1 signal. The application will handle the signal and create a heap dump in the application's working directory. The heap dump will contain the Collection of Person objects referenced by the static variable CROWD .

Follow these steps to build a native executable that will produce a heap dump when it receives a SIGUSR1 signal.

Prerequisite

Make sure you have installed a GraalVM JDK. The easiest way to get started is with SDKMAN! . For other installation options, visit the Downloads section .

  • Save the following code in a file named SVMHeapDump.java :

import java.nio.charset.Charset;
import java.text.DateFormat;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Date;
import java.util.Random;

import org.graalvm.nativeimage.ProcessProperties;

public class SVMHeapDump extends Thread {
    static Collection<Person> CROWD = new ArrayList<>();
    static DateFormat DATE_FORMATTER = DateFormat.getDateTimeInstance();
    static int i = 0;
    static int runs = 60;
    static int sleepTime = 1000;

    @Override
    public void run() {
        System.out.println(DATE_FORMATTER.format(new Date()) + ": Thread started, it will run for " + runs + " seconds");
        while (i < runs) {
            // Add a new person to the collection
            CROWD.add(new Person());
            System.out.println("Sleeping for " + (runs - i) + " seconds.");
            try {
                Thread.sleep(sleepTime);
            } catch (InterruptedException ie) {
                System.out.println("Sleep interrupted.");
            }
            i++;
        }
    }

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) throws InterruptedException {
        // Add objects to the heap
        for (int i = 0; i < 1000; i++) {
            CROWD.add(new Person());
        }
        long pid = ProcessProperties.getProcessID();

        StringBuffer sb1 = new StringBuffer(100);
        sb1.append(DATE_FORMATTER.format(new Date()));
        sb1.append(": Hello GraalVM native image developer!\n");
        sb1.append("The PID of this process is: " + pid + "\n");
        sb1.append("Send it a signal: ");
        sb1.append("'kill -SIGUSR1 " + pid + "' \n");
        sb1.append("to dump the heap into the working directory.\n");
        sb1.append("Starting thread!");
        System.out.println(sb1);

        SVMHeapDump t = new SVMHeapDump();
        t.start();
        while (t.isAlive()) {
            t.join(0);
        }

        sb1 = new StringBuffer(100);
        sb1.append(DATE_FORMATTER.format(new Date()));
        sb1.append(": Thread finished after: ");
        sb1.append(i);
        sb1.append(" iterations.");
        System.out.println(sb1);
    }
}

class Person {
    private static Random R = new Random();
    private String name;
    private int age;

    public Person() {
        byte[] array = new byte[7];
        R.nextBytes(array);
        name = new String(array, Charset.forName("UTF-8"));
        age = R.nextInt(100);
    }
}

Build a native executable:

Compile SVMHeapDump.java as follows:

Build a native executable using the --enable-monitoring=heapdump command-line option. (This causes the resulting native executable to produce a heap dump when it receives a SIGUSR1 signal.)

(The native-image builder creates a native executable from the file SVMHeapDump.class . When the command completes, the native executable svmheapdump is created in the current directory.)
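The build commands elided above might look like the following (a sketch assuming a GraalVM JDK with javac and native-image on the PATH):

```shell
# Compile the source with the GraalVM JDK
javac SVMHeapDump.java

# Build a native executable with heap dump support enabled;
# this produces the executable "svmheapdump" in the current directory
native-image --enable-monitoring=heapdump SVMHeapDump
```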

Run the application, send it a signal, and check the heap dump:

Run the application:

Make a note of the PID and open a second terminal. Use the PID to send a signal to the application. For example, if the PID is 57509 :

The heap dump will be created in the working directory while the application continues to run. The heap dump can be opened with the VisualVM tool, as illustrated below.

Native Image Heap Dump View in VisualVM

Create a Heap Dump from within a Native Executable

The following example shows how to create a heap dump from a running native executable using VMRuntime.dumpHeap() if some condition is met. The condition to create a heap dump is provided as an option on the command line.

Save the code below in a file named SVMHeapDumpAPI.java .

As in the earlier example, the application creates a Collection of Person objects referenced by the static variable CROWD . It then checks the command line to see whether a heap dump has to be created, and then creates the heap dump in the method createHeapDump() .

Build a native executable.

Compile SVMHeapDumpAPI.java and build a native executable:

When the command completes, the svmheapdumpapi native executable is created in the current directory.

Run the application and check the heap dump

Now you can run your native executable and create a heap dump from it with output similar to the following:

The resulting heap dump can be then opened with the VisualVM tool like any other Java heap dump, as illustrated below.

Native Image Heap Dump View in VisualVM

Related Documentation

  • Debugging and Diagnostics

  • Watling, H.R. The bioleaching of nickel sulphides. Hydrometallurgy 2008 , 91 , 70–88. [ Google Scholar ] [ CrossRef ]
  • Watling, H.R.; Elliot, A.D.; Maley, M.; van Bronswijk, W.; Hunter, C. Leaching of a low-grade, copper-nickel sulfide ore. 1. Key parameters impacting on Cu recovery during column bioleaching. Hydrometallurgy 2009 , 97 , 204–212. [ Google Scholar ] [ CrossRef ]
  • Maley, M.; van Bronswijk, W.; Watling, H.R. Leaching of a low-grade, copper-nickel sulfide ore 2. Impact of aeration and pH on Cu recovery during abiotic leaching. Hydrometallurgy 2009 , 98 , 66–72. [ Google Scholar ] [ CrossRef ]
  • Maley, M.; van Bronswijk, W.; Watling, H.R. Leaching of a low-grade, copper-nickel sulfide ore 3. Interactions of Cu with selected sulfide minerals. Hydrometallurgy 2009 , 98 , 73–80. [ Google Scholar ] [ CrossRef ]
  • Halinen, A.-K.; Rahunen, N.; Kaksonen, A.H.; Puhakka, J.A. Heap bioleaching of a complex sulfide ore: Part I. Effect of temperature on base metal extraction and bacterial compositions. Hydrometallurgy 2009 , 98 , 92–100. [ Google Scholar ] [ CrossRef ]
  • Halinen, A.-K.; Rahunen, N.; Kaksonen, A.H.; Puhakka, J.A. Heap bioleaching of a complex sulfide ore: Part II. Effect of temperature on base metal extraction and bacterial compositions. Hydrometallurgy 2009 , 98 , 101–107. [ Google Scholar ] [ CrossRef ]
  • Qin, W.; Zhen, S.; Yan, Z.; Campbell, M.; Wang, J.; Liu, K.; Zhang, Y. Heap bioleaching of a low-grade nickel-bearing sulfide ore containing high levels of magnesium as olivine, chlorite and antigorite. Hydrometallurgy 2009 , 98 , 58–65. [ Google Scholar ] [ CrossRef ]
  • Zhen, S.; Yan, Z.; Zhang, Y.; Wang, J.; Campbell, M.; Qin, W. Column bioleaching of a low grade nickel-bearing sulfide ore containing high magnesium as olivine, chlorite and antigorite. Hydrometallurgy 2009 , 96 , 337–341. [ Google Scholar ] [ CrossRef ]
  • Yang, C.; Qin, W.; Lai, S.; Wang, J.; Zhang, Y.; Jiao, F.; Ren, L.; Zhuang, T.; Chang, Z. Bioleaching of a low grade nickel-copper-cobalt sulfide ore. Hydrometallurgy 2011 , 106 , 32–37. [ Google Scholar ] [ CrossRef ]
  • Bhatti, T.M.; Bigham, J.M.; Vuorinen, A.; Tuovinen, O.H. Chemical and bacterial leaching of metals from black schist sulfide minerals in shake flasks. Int. J. Miner. Process. 2012 , 110–111 , 25–29. [ Google Scholar ] [ CrossRef ]
  • Mandziak, T.; Pattinson, D. Experience-based approach to successful heap leach pad design. Min. World 2015 , 12 , 28–35. [ Google Scholar ]
  • Petersen, J.; Dixon, D.G. Modelling zinc heap bioleaching. Hydrometallurgy 2007 , 85 , 127–143. [ Google Scholar ] [ CrossRef ]
  • Mellado, M.E.; Galvez, E.D.; Cisternas, L.A.; Ordonez, J. A posteriori analysis of analytical models for heap leaching. Miner. Metall. Process. 2012 , 29 , 103–112. [ Google Scholar ]
  • Ding, D.; Song, J.; Ye, Y.; Li, G.; Fu, H.; Hu, N.; Wang, Y. A kinetic model for heap leaching of uranium ore considering variation of model parameters with depth of heap. J. Radioanal. Nucl. Chem. 2013 , 298 , 1477–1482. [ Google Scholar ] [ CrossRef ]
  • McBride, D.; Gebhardt, J.E.; Croft, T.N.; Cross, M. Modeling the hydrodynamics of heap leaching in sub-zero temperatures. Miner. Eng. 2016 , 90 , 77–88. [ Google Scholar ] [ CrossRef ]
  • Lodeyschikov, V.V. Processing of nickel ores by heap leaching bacteria. The experience of the Finnish company Talvivaara. Zolotodobyicha. 2009 , 132 , 12–14. (In Russian) [ Google Scholar ]
  • Sinha, K.P.; Smith, M.E. Cold climate heap leaching. In Proceedings of the 3rd International Conference on Heap Leach Solutions, Reno, NV, USA, 12–16 September 2015. [ Google Scholar ]
  • Smith, K.E. Cold weather gold heap leaching operational methods. JOM 1997 , 49 , 20–23. [ Google Scholar ] [ CrossRef ]
  • Shesternev, D.M.; Myazin, V.P.; Bayanov, A.E. Heap gold leaching in permafrost zone in Russia. Gornyi Zhurnal (Min. J.) 2015 , 1 , 49–54. (In Russian) [ Google Scholar ]
  • Ptitsyn, A.B. Geochemical Fundamentals of Metal Geotechnology in Permafrost Conditions ; Nauka: Novosibirsk, Russia, 1992; p. 120. (In Russian) [ Google Scholar ]
  • Ptitsyn, A.B.; Sysoeva, E.I. Cryogenic mechanism of the Udokan oxidizing area formation. Russ. Geol. Geophy. 1995 , 36 , 90–97. [ Google Scholar ]
  • Abramova, V.A.; Ptitsyn, A.B.; Markovich, T.I.; Pavlyukova, V.A.; Epova, E.S. Geochemistry of Oxidation in Permafrost Zones ; Nauka: Novosibirsk, Russia, 2009; p. 88. (In Russian) [ Google Scholar ]
  • Yurgenson, G.A. The cryomineralogenesis of minerals in the technological massifs. In Proceedings of Mineralogy of technogenesis—2009 ; Institute of Mineralogy, Ural Branch of RAS: Miass, Russia, 2009; pp. 61–75. (In Russian) [ Google Scholar ]
  • Markovich, T.I. Processes of Heavy Metal Sulphides Oxidation with Nitrous Acid. Ph.D. Thesis, Trofimuk Institute of Petroleum Geology and Geophysics, Siberian Branch of the Russian Academy of Sciences (IPGG SB RAS), Novosibirsk, Russia, February 2000. (In Russian). [ Google Scholar ]
  • Ptitsyn, A.B.; Markovich, T.I.; Pavlyukova, V.A.; Epova, E.S. Modeling cryogeochemical processes in the oxidation zone of sulfide deposits with the participation of oxygen-bearing nitrogen compounds. Geochem. Int. 2007 , 45 , 726–731. [ Google Scholar ] [ CrossRef ]
  • Abramova, V.A.; Parshin, A.V.; Budyak, A.E. Physical and chemical modeling of the influence of nitrogen compounds on the course of geochemical processes in the cryolithozone. Kriosfera Zemli. 2015 , 9 , 32–37. (In Russian) [ Google Scholar ]
  • Abramova, V.A.; Parshin, A.V.; Budyak, A.E.; Ptitsyn, A.B. Geoinformation modeling of sulfide frost weathering in the area of Udokan deposit. J. Min. Sci. 2017 , 53 , 501–597. [ Google Scholar ] [ CrossRef ]
  • Khalezov, B.D. Problems of Udokansky field ores processing. Min. Inf. Anal. Bull. 2014 , 8 , 103–108. (In Russian) [ Google Scholar ]
  • Riekkola-Vanhanen, M. Talvivaara black schist bioheapleaching demonstration plant. Adv. Mater. Res. 2007 , 20–21 , 30–33. [ Google Scholar ] [ CrossRef ]
  • Puhakka, J.A.; Kaksonen, A.H.; Riekkola-Vanhanen, M. Heap leaching of black schist. In Biomining ; Rawlings, D.E., Johnson, D.B., Eds.; Springer-Verlag: Berlin, Germany, 2007; pp. 139–151. [ Google Scholar ]
  • Halinen, A.K.; Rahunen, N.; Määttä, K.; Kaksonen, A.H.; Riekkola-Vanhanen, M.; Puhakka, J. Microbial community of Talvivaara demonstration bioheap. Adv. Mater. Res. 2007 , 20–21 , 579. [ Google Scholar ] [ CrossRef ]
  • Riekkola-Vanhanen, M. Talvivaara Sotkamo mine—Bioleaching of a polymetallic nickel ore in subarctic climate. Nova Biotechnol. 2010 , 1011 , 7–14. [ Google Scholar ]
  • Riekkola-Vanhanen, M.; Palmu, L. Talvivaara Nickel Mine—From a project to a mine and beyond. In Proceedings of Symposium Ni-Co 2013 ; Battle, T., Moats, M., Cocalia, V., Oosterhof, H., Alam, S., Allanore, A., Jones, R., Stubina, N., Anderson, C., Wang, S., Eds.; Springer International Publishers: Cham, Switzerland, 2016; pp. 269–278. [ Google Scholar ]
  • Annual Report Talvivaara 2013 ; Talvivaara Sotkamo Ltd.: Tuhkakylä, Finland, 2013.
  • Golovko, E.A.; Rozental, A.K.; Sedel’nikov, V.A.; Suhodrev, V.M. Chemical and Bacterial Leaching of Copper-Nickel ores ; Nauka: Leningrad, Russia, 1978; p. 199. (In Russian) [ Google Scholar ]
  • Lyalikova, N.N. Bacteria role in the sulfide ores oxidizing of the copper-nickel deposits on the Kola Peninsula. Microbiology 1961 , 30 , 135–139. (In Russian) [ Google Scholar ]
  • Karavaiko, G.I.; Kuznetsov, S.I.; Golomzik, A.I. Role of Microorganisms in Leaching Metals from Ores ; Nauka: Moscow, Russia, 1972; p. 248. (In Russian) [ Google Scholar ]
  • Seleznev, S.G. Unconventional effective ways to enrich the sulfide copper-nickel ores on the example of Allarechenskiy technogenic deposit. News High. Inst. Min. J. 2011 , 8 , 118–125. (In Russian) [ Google Scholar ]
  • Seleznev, S.G. The Specificity and Development Problems for the Dumps of Sulfide Copper-Nickel Ores Allarechensky Deposit. Ph.D. Thesis, Ural State Mining University, Yekaterinburg, Russia, December 2013. (In Russian). [ Google Scholar ]
  • Svetlov, A.; Kravchenko, E.; Selivanova, E.; Seleznev, S.; Nesterov, D.; Makarov, D.; Masloboev, V. Perspectives for heap leaching of non-ferrous metals (Murmansk Region, Russia). J. Pol. Min. Eng. Soc. (Inzynieria Mineralna) 2015 , 36 , 231–236. [ Google Scholar ]
  • Svetlov, A.; Seleznev, S.; Makarov, D.; Selivanova, E.; Masloboev, V.; Nesterov, D. Heap leaching and perspectives of bioleaching technology for the processing of low-grade copper-nickel sulfide ores in the Murmansk region, Russia. J. Pol. Min. Eng. Soc. (Inzynieria Mineralna) 2017 , 39 , 51–59. [ Google Scholar ]
  • Svetlov, A.V.; Makarov, D.V.; Goryachev, A.A. Directions for intensification of leaching of non-ferrous metals on the example of low-grade copper-nickel ore deposits in the Murmansk region. In Proceedings of Mineralogy of Technogenesis—2017 ; Institute of Mineralogy, Ural Branch of RAS: Miass, Russia, 2017; pp. 154–162. (In Russian) [ Google Scholar ]
  • Fokina, N.V.; Yanishevskaya, E.S.; Svetlov, A.V.; Goryachev, A.A. Functional activity of microorganisms in mining and processing of copper-nickel ores in the Murmansk Region. Vestnik of MSTU (Murmansk State Technical University). 2018 , 21 , 109–116. (In Russian) [ Google Scholar ]


Content, %:

             Lake Moroshkovoye    NKT      Nyud Terrasa
  Nickel     0.547                0.567    0.465
  Copper     0.036                0.363    0.044

             Lake Moroshkovoye    NKT      Nyud Terrasa
  Nickel     1.87%                0.97%    0.32%
  Copper     0.13%                0.24%    0.06%

Share and Cite

Masloboev, V.A.; Seleznev, S.G.; Svetlov, A.V.; Makarov, D.V. Hydrometallurgical Processing of Low-Grade Sulfide Ore and Mine Waste in the Arctic Regions: Perspectives and Challenges. Minerals 2018, 8, 436. https://doi.org/10.3390/min8100436



Growing up in Russia’s biggest rubbish dump

Oscar-nominated filmmaker Hanna Polak on her Witness documentary about life in a notorious Russian landfill.

Something Better to Come

Editor’s note: The svalka was closed in 2007. A smaller dump was opened on the same landfill site where people continued to live.

At age 10, Yula had just one dream: to lead a normal life.

I first met Yula in 2000. She was one of the inhabitants of the “svalka” outside Moscow. This svalka, known simply by its Russian term for rubbish dump, was the largest landfill in Europe. It lay only 20km from the Kremlin, in Putin’s Russia, on the outskirts of the Russian capital – the city with the world’s third largest number of billionaires.

Yula is the subject of my film Something Better to Come, a Witness documentary currently airing on Al Jazeera, which follows her life over a 14-year period.

In 2000, I had been working with a volunteer group helping Moscow’s homeless children. Some of the young people that I met took me to the svalka for the first time. 

I didn’t have a permit to visit the rubbish dump (it would have been impossible to obtain one anyway), so they taught me how to enter undetected. 

Inside, I discovered a dystopian place where hazardous waste was dumped, heavy machinery constantly operated, and hundreds of wild dogs roamed around. Although no one “officially” lived there, it was home to an estimated 1,000 people: the most destitute of Russia’s underclass. This community was exploited by a local mafia, which ran illegal recycling centres. The landfill was like a country within a country: hidden from the external world, lawless, but with its own rules and codes.

Few organisations or people helped Moscow’s homeless children. Virtually no one came to the landfill to help its inhabitants. For the outside world, these people didn’t exist.

I wanted to help the landfill’s inhabitants – through medical assistance, for instance, which I have brought to them over the years – but also by chronicling their lives.


Yula, right, and a friend cook in the svalka [Courtesy of Hanna Polak]

The ‘waste mafia’

Yula’s parents had brought her to the landfill when she was 10, after their home was demolished. Her father was an alcoholic and her mother, Tania, had lost her job. Their neighbours told them about the dump, where food could be found and pennies earned.

Shortly after the family arrived, Yula’s father was detained in a prison for the homeless where he contracted tuberculosis. He died soon after his release. Tania became an alcoholic and Yula looked after her mother. Yula grew up quickly, in a world rife with poverty, despair and decay.

Although Yula was shy and didn’t speak often – not an easy protagonist to film – I was drawn to her. She was feisty, stubborn and fun; she was different from the other children.

Her home, this huge mountain of trash, almost 100 metres high and nearly two kilometres long on one side, was surrounded by a tall fence. Guards monitored it closely to keep intruders out.

The people who lived there worked as scavengers, sorting the rubbish which came from Moscow, collecting recyclable materials, such as bottles, metal, paper and plastic, for the “waste mafia”.

The workers earned just two rubles ($0.03) per kilogramme of metal sorted, not the 78 rubles ($1.27) per kilo they could have earned outside. A bottle of fake vodka – a grain alcohol manufactured for industrial use – was the most common form of currency: it was how the mafia paid the dump’s denizens.

This mafia posed a constant threat to the waste-pickers’ lives: if the dump’s inhabitants tried to work for a different trash overlord, they risked being beaten or killed. If they tried to remove goods from the landfill they risked execution. If they were killed, they disappeared into the rubbish for ever.

Bulldozers sometimes buried people alive. Women were frequently raped. Yet the police were never called; it was common knowledge that criminal investigations or ambulances weren’t welcome there. Corrupt police officers kept charity workers and ambulances out. On the rare occasions that the federal police did come, they burnt down huts and arrested people for living there illegally.

For most of the people who came to the svalka, this was their last stop before death. Most deaths occurred during the cold Russian winter, when storms swept across this mountain of waste. One winter, Yula counted almost 30 deaths in a week.

There, everyone was a doctor. People got sick, gave birth, and sometimes cut off their own limbs or toes when they froze in order to avoid gangrene.


Yula inspects makeup [Courtesy of Hanna Polak]

Lack of hope

Although life was grim, it also often brought out the best in people. 

The landfill’s denizens generously shared their vodka with each other and opened their ramshackle sheds to shelter those who needed it.

Despite the misery that life had to offer, people strived for normality in the dump.

It was dangerous to film at the landfill. I stepped on nails and was lucky not to get sick. Once, I was able to fend off attacking dogs with pepper spray. I was caught and arrested numerous times by the dump’s security guards and the local police and was warned many times never to return. Twice my materials were destroyed. I managed to escape the dump’s security forces a number of times. Another woman journalist who came to film there wasn’t so lucky – both her camera and nose were broken.

But the people living there welcomed me warmly.

“We are like flies, like dogs, we are like roaches of society,” Olga, another protagonist in the film, told me.

I think my presence as someone from the world which had rejected them signalled the possibility to them that society could one day accept them again.

As a child, Yula played innocent games with the other children and with the toys found in the rubbish. She cracked jokes, listened to music and read magazines plucked from the trash. She listened to the radio to keep up with what was going on in the outside world.

She dyed her hair pink and wore makeup to look beautiful and glamorous and to briefly escape the dreariness of her life. All this – toys, clothes, makeup and hair dye – she’d find at the dump.

Yula once told me that the landfill used to be a source of hope for her.

“[It was] like the Pinocchio story: a field of wonder. There’s a pile of cookies here, a toy there.”

She explained that people came there after having nowhere else to go, and hoped for a better life but only ended up in misery.

“I lost everything here. I lost my mother [to alcoholism], my father, I lost all normal life here. Before it was a field of wonder, but now I see it is a field of fools,” she said as a teenager.

At 13, Yula had started drinking.

“It helps you forget that you had something in the past, maybe a normal life, and now you simply don’t have anything,” she told me.

The worst horror in the svalka was the rampant lack of hope. The place was like quicksand, dragging people deeper and deeper into despair – those who were sucked into this vortex of homelessness almost never managed to escape it. But Yula refused to live and die like so many others there.


Andrey and Yula at 16 [Mariusz Margas]

Escaping the garbage dump

At 16, Yula realised that she would never be able to have a normal life outside the svalka unless she found the strength to leave this vicious cycle of poverty, addiction, and hopelessness.

The first step was to find work outside the svalka. She learnt how to cut metal parts and make fences for the cemetery. The work was hard and dirty and badly paid, but with this job, she took her first step outside the rubbish dump.

She and her boyfriend Andrey – who was brought to the dump by his mother, who ended up dying there – managed to find cheap accommodation. He and Yula supported each other as best as they could.

Yula stopped drinking. She found seasonal work despite her lack of formal education.

And, just as Yula turned 21, she got one lucky break: she discovered that she was eligible for a government subsidy for housing because her father’s apartment was demolished.

She got her own apartment and on April 25, 2014, she gave birth to a baby girl, Eva.

What once seemed like an impossibility to Yula had become a reality, albeit not an easy one.

The apartment Yula owns is 300km away from Moscow and both she and Andrey can only find small jobs in the city. They travel between work and home, leaving Eva in the care of Yula’s mother, who now lives with them. The economic sanctions on Russia don’t make it easier – there is less work than there used to be and their wages have dropped. 

In July this year, Eva was diagnosed with a very serious disease – osteomyelitis – an extremely rare bone marrow infection, which has required several surgeries and constant medical attention. Eva now awaits more surgery and Yula has stopped working to care for her daughter full-time. Andrey struggles to find work.

The couple are barely able to pay the bills, let alone cover the mounting medical expenses for their daughter. Yula worries about losing her daughter, who remains seriously ill. She worries too about having to give up her apartment and being forced to return to the dump.

She told me she never thought “normal life would be so hard”.

As she faces another struggle, I think about what Yula told me when she just got her own apartment, when I asked her what she thought was unique about her.

“I don’t feel unique in any way …,” she had replied. “Well, perhaps in one way – if I am offered even the slightest opportunity, I will seize it and utilise it to the fullest.”

The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial policy.

For more insights into Yula’s life please visit the film’s website .

Hanna Polak is a Polish documentary filmmaker. Her film The Children of Leningradsky (2004) was nominated for an Oscar and two Emmy Awards. She is an advocate for improving the lives of homeless and underprivileged children and a founder of the Russian NGO Active Child Aid.

Her film Something Better to Come is currently airing on Witness, Al Jazeera English.


'heapdump.xxx.phd': Not a HPROF heap dump (java.io.IOException)

The Eclipse Memory Analyser docs say it can open IBM portable heap dump files (*.phd):

http://help.eclipse.org/luna/index.jsp?topic=/org.eclipse.mat.ui.help/welcome.html

However, when I try to open one I get an error:

I've tried both menu options (File > Open Heap Dump) and (File > Open File)
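The error message itself is a useful clue: MAT's default parser expects the HPROF format, while IBM JVMs write Portable Heap Dump (PHD) files. The two formats can be told apart from their leading bytes – HPROF files begin with a banner like "JAVA PROFILE 1.0.2", while PHD files begin with a Java modified-UTF string reading "portable heap dump" (a 2-byte length prefix followed by the text). A minimal sketch of such a format check (the `DumpFormatSniffer` class and `sniff` helper are illustrative, not part of MAT):

```java
import java.io.FileInputStream;
import java.io.IOException;

public class DumpFormatSniffer {

    // Returns "HPROF", "PHD", or "unknown", judged from the file's leading bytes.
    static String sniff(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            byte[] head = new byte[32];
            int n = in.read(head);
            String s = new String(head, 0, Math.max(n, 0), "ISO-8859-1");
            // HPROF dumps start with a banner such as "JAVA PROFILE 1.0.2\0"
            if (s.startsWith("JAVA PROFILE")) return "HPROF";
            // PHD dumps start with a modified-UTF string: 2-byte length,
            // then the text "portable heap dump"
            if (s.length() > 2 && s.substring(2).startsWith("portable heap dump")) return "PHD";
            return "unknown";
        }
    }
}
```

Running the sniffer on a `.phd` file should report "PHD", confirming that the plain HPROF parser cannot read it and an IBM-aware tool or plugin is required.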

  • eclipse-memory-analyzer


2 Answers

You have to install DTFJ (IBM's Diagnostic Tool Framework for Java) in order to read IBM files.

http://wiki.eclipse.org/MemoryAnalyzer#System_Dumps_and_Heap_Dumps_from_IBM_Virtual_Machines

Eclipse download site is at the bottom here:

http://www.ibm.com/developerworks/java/jdk/tools/dtfj.html


The Eclipse Memory Analyzer throws the exception:

So I have to use IBM HeapAnalyzer: http://public.dhe.ibm.com/software/websphere/appserv/support/tools/HeapAnalyzer
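As an aside: if the goal is simply a dump that MAT can open without extra plugins, a HotSpot-based JVM can produce an HPROF file directly. A minimal sketch using the `HotSpotDiagnosticMXBean` (HotSpot-specific – IBM/OpenJ9 JVMs use their own dump agents and produce PHD files instead):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;
import java.nio.file.Files;
import java.nio.file.Paths;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        String file = "heapdump.hprof";      // dumpHeap requires the .hprof extension on recent JDKs
        Files.deleteIfExists(Paths.get(file)); // dumpHeap refuses to overwrite an existing file
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(file, true);           // true = live objects only (forces a GC first)
        System.out.println("Wrote " + Files.size(Paths.get(file)) + " bytes to " + file);
    }
}
```

The same kind of dump can also be captured externally with `jmap -dump:live,format=b,file=heap.hprof <pid>` against a running HotSpot process.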



phd file heap dump

IMAGES

  1. How To Analyze A Heap Dump PHD File

    phd file heap dump

  2. PHD file extension

    phd file heap dump

  3. PHDファイルを開く、または変換する方法は?

    phd file heap dump

  4. Heap dump analysis using Eclipse Memory Analyzer Tool (MAT)

    phd file heap dump

  5. Heap Dump & Analysis

    phd file heap dump

  6. Heap Dump & Analysis

    phd file heap dump

VIDEO

  1. Marble Run ASMR Race ☆ HABA Slope & Dump Truck Excavator Ambulance Forklift Garbage Truck Tractors

  2. how to install ipsp

  3. Beyond BMI: Understanding Body Composition and Obesity

  4. Java Heap Space Analysis Indepth

  5. Performance testing

  6. July Dump! #july24 #indianinusa #lifeintexas #summervibes #ytshortsvideo #myvlog #fyp #explore

COMMENTS

  1. Heap dump

    General structure. The following structure comprises the header section of a PHD file: A UTF string indicating that the file is a portable heap dump; An int containing the PHD version number; An int containing flags:. 1 indicates that the word length is 64-bit.; 2 indicates that all the objects in the dump are hashed. This flag is set for heap dumps that use 16-bit hash codes.

  2. Creating and Analyzing Java Heap Dumps

    the Portable Heap Dump (PHD) format. PHD is the default format. The classic format is human-readable since it is in ASCII text, but the PHD format is binary and should be processed by appropriate tools for analysis. ... After running this command the heap dump file with extension hprof is created. The option live is used to collect only the ...

  3. How to analyse a .phd heap dump from an IBM JVM

    In the list below, an item should appear called IBM Monitoring and Diagnostic Tools. Tick the box next to it, click Next, and follow the wizard to accept the license agreements and install the toolkit. Restart Eclipse when prompted. Choose File -> Open Heap Dump and choose your .phd file.

  4. Differences Between Heap Dump, Thread Dump and Core Dump

    The classic format is human-readable, while the PHD is in binary and needs tools for further analysis. Also, PHD is the default for a heap dump. Moreover, modern heap dumps also contain some thread information. Starting from Java 6 update 14, a heap dump also contains stack traces for threads.

  5. heap dump

    According to this question, it is necessary to install DTJF on Eclipse Memory Analyzer. This link in the question says: Memory Analyzer can also read memory-related information from IBM system dumps and from Portable Heap Dump (PHD) files. For this purpose one just has to install the IBM DTFJ feature into Memory Analyzer version 0.8 or later.

  6. PHD Heapdump file format

    An int containing the length of the array of references. The array of references. For more information, see the description in the short record format. Portable Heap Dump (PHD) file format. PHD files can contain short, medium, and long object records, depending on the number of object references in the Heapdump.

  7. Locating and analyzing heap dumps

    For example, on the Windows operating system, the directory is: profile_root\myProfile. IBM® heap dump files are usually named in the following way: heapdump. <date>..<timestamp><pid>.phd. Gather all the .phd files and transfer them to your problem determination machine for analysis. Many tools are available to analyze heap dumps that include ...

  8. Heap dump analysis using Eclipse Memory Analyzer Tool (MAT)

    And the screenshots posted below are from the MAT plugin used with Eclipse IDE. The steps to load the heap dump are as follows. Open Eclipse IDE or the standalone MAT Tool. From the toolbar, Select Files > Open File from the dropdown menu. Open the heap dump file with the extension .hprof and you should see the overview page as shown below.

  9. Java VisualVM

    You can use Java VisualVM to browse the contents of a heap dump file and quickly see the allocated objects in the heap. Heap dumps are displayed in the heap dump sub-tab in the main window. You can open binary format heap dump files (.hprof) saved on your local system or use Java VisualVM to take heap dumps of running applications.

  10. Heap dump

    General structure. The following structure comprises the header section of a PHD file: A UTF string indicating that the file is a portable heap dump; An int containing the PHD version number; An int containing flags:. 1 indicates that the word length is 64-bit.; 2 indicates that all the objects in the dump are hashed. This flag is set for heap dumps that use 16-bit hash codes.

  11. PHD File

    Portable heap dumps can be generated by setting the following environment variable parameters: IBM_HEAP_DUMP=true and IBM_HEAPDUMP=true. PHD files may be saved in a text or a binary format. However, the binary format is much smaller in file size. To specify the text format, set the IBM_JAVA_HEAPDUMP_TEXT=true environment variable.

  13. Acquiring Heap Dumps

    Portable Heap Dump (PHD) files generated with the Heap option, as well as HPROF files, can be compressed using the gzip compressor to reduce the file size. A compressed file may take longer to parse in Memory Analyzer, however, and running queries, generating reports, and reading fields from objects may be slower.
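As a quick sketch, compressing a dump before transferring it is a one-liner (the heapdump-example.hprof file below is a stand-in created here just for illustration):

```shell
# Create a 1 MiB stand-in "heap dump" file for illustration
head -c 1048576 /dev/zero > heapdump-example.hprof

# Compress it; -k keeps the original file alongside the .gz
gzip -k heapdump-example.hprof

ls -l heapdump-example.hprof heapdump-example.hprof.gz
```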

  14. Create a Heap Dump from a Native Executable

    The heap dump can be opened with the VisualVM tool, as illustrated below. The following example shows how to create a heap dump from within a running native executable using VMRuntime.dumpHeap() when some condition is met. The condition to create a heap dump is provided as an option on the command line.

  16. Using Heapdump

    Text (classic) Heapdump file format: the text or classic Heapdump is a list of all object instances in the heap, including object type, size, and references between objects. Portable Heap Dump (PHD) file format: a PHD Heapdump file contains a header, plus a number of records that describe objects, arrays, and classes.

  19. 'heapdump.xxx.phd': Not a HPROF heap dump (java.io.IOException)

    A .phd file is in the Portable Heap Dump format, not HPROF, so tools that expect an HPROF file fail to load it with this java.io.IOException. Similar loading problems are reported for very large dumps: a 10 GB heap dump in HPROF format failed to load in MAT, Java VisualVM, and JProfiler, and a Java heap dump file (.hprof) can be much larger than the heap size shown in Eclipse MAT.
