Extendible hash table–Destruction
After a long journey, I have an actual data structure implemented. I only lightly tested it, and didn’t really do much with it. In fact, as it currently stands, I didn’t even implement a way to delete the table. I relied on the process exiting to release the memory.
It sounds like a silly omission, right? Something that is easily fixed. But I ran into a tricky problem while implementing this. Let’s write the simplest free method we can:
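A minimal sketch of what that naive free method might look like. The struct layout here (a `number_of_buckets` count and a `buckets` array of page pointers) is an assumption for illustration, not the post’s actual implementation:

```c
#include <stdlib.h>

// Assumed layout: a directory of bucket pointers, each pointing to a page.
typedef struct hash_table {
    size_t number_of_buckets;
    void **buckets;  // buckets[i] points to a page (pages may be shared!)
} hash_table_t;

// Naive destruction: free every bucket's page, then the directory itself.
void hash_table_free(hash_table_t *table) {
    for (size_t i = 0; i < table->number_of_buckets; i++) {
        free(table->buckets[i]);
    }
    free(table->buckets);
    free(table);
}
```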
Simple enough, no? But let’s look at one setup of the table, shall we?
As you can see, I have a list of buckets, each of them pointing to a page. However, multiple buckets may point to the same page. The code above is going to double free address 0x00748000!
I need some way to handle this properly, but I can’t easily keep track of which buckets I have already deleted. That would require a hash table, and I’m trying to delete one. I also can’t track it in the memory that I’m going to free, because I can’t access it after free() was called. So what to do?
I thought about this for a while, and I came up with the following solution.
What is going on here? Because we may have duplicates, we first sort the buckets by the value of the pointer. Then we simply scan through the list, skipping duplicates, so each page is freed exactly once.
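A sketch of that sort-then-skip approach, using the same assumed struct layout as above (the comparator and function names are mine, not the post’s):

```c
#include <stdint.h>
#include <stdlib.h>

// Assumed layout: a directory of bucket pointers, each pointing to a page.
typedef struct hash_table {
    size_t number_of_buckets;
    void **buckets;  // buckets[i] points to a page (pages may be shared!)
} hash_table_t;

// Order bucket pointers by address so duplicates end up adjacent.
static int compare_ptrs(const void *a, const void *b) {
    uintptr_t pa = (uintptr_t) * (void *const *)a;
    uintptr_t pb = (uintptr_t) * (void *const *)b;
    return (pa > pb) - (pa < pb);
}

void hash_table_free(hash_table_t *table) {
    qsort(table->buckets, table->number_of_buckets, sizeof(void *),
          compare_ptrs);
    void *prev = NULL;
    for (size_t i = 0; i < table->number_of_buckets; i++) {
        if (table->buckets[i] == prev)
            continue;  // same page as the previous bucket, already freed
        prev = table->buckets[i];
        free(prev);
    }
    free(table->buckets);
    free(table);
}
```

Sorting pointers by numeric value is perfectly fine here: we don’t care about any meaningful order, only that identical addresses become neighbors.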
There is a certain elegance to it, even if the qsort() usage is really bad in terms of ergonomics (and performance).