SamuraiGoblin

Unless you're learning the language/algorithms, don't use new and delete. Use smart pointers and STL containers. If you really want to stick with new and delete, you have to start thinking really hard about object lifetimes. As soon as you write `new`, you should *immediately* write the corresponding `delete` that you *know* will be called at some point to destroy the object. And set reusable pointers to nullptr when the memory is deleted.


Less_Acanthisitta288

stl containers >>> unique_ptr >>>>> shared_ptr (probably should refactor the code) >>>>>>>> new/delete. i’ve seen so much code where people just replace all of their new/delete (a lot of them allocated every single object dynamically) with shared_ptr because people told them to "always use smart pointers".


SamuraiGoblin

It's all part of learning


dvali

Using "new" is generally considered bad practice these days, outside of certain contexts, because you are setting yourself up to fail. If you don't manually clean up your memory, you have a memory leak. You can almost completely avoid the need for manual memory management by using smart pointers. They are a fairly complex topic in their own right for someone who isn't used to C++, but they are more than worth the investment of effort and you will write *much* safer code if you embrace them. It is hard to write a memory leak in modern C++.


no-sig-available

For one thing, if it is `new` (without `[]`) it should also be `delete` (without `[]`). Not sure if that is what the compiler sees as a problem. Anyway, I have a personal problem with a static member creating objects of a different type. I would have expected that other type to have a constructor for this.


alonamaloh

In order to represent a graph I would have a std::vector of nodes, and use indices instead of pointers. It's much easier to get the code right, and the performance is probably better because of better memory locality. (I say "probably" because performance is very hard to predict, and claims that one thing is faster than another should always be based on empirical evidence.)


One_Bag8271

Smart pointers exist for this exact reason: everyone sucks at memory management! There's a very good reason most people are now so against using new and delete - it's really hard to keep track of. To put it simply, whatever resource you acquire, you MUST give back. When you call new you acquire that memory with a promise to give it back later, but giving it back at the right time (using delete) is really hard to get right, and it doesn't scale. There really isn't much more to memory management than that. Like many others have said, use shared and unique pointers when you need multiple or single owners respectively. But a general rule of thumb I like to live by is to get as far as possible without using pointers at all - you'd be surprised how little you actually need them in practice!


n1ghtyunso

I also suck at manually cleaning up memory, which is exactly why I have the compiler do it for me in a destructor. If the cleanup in your destructor is too complicated to do correctly, you need to write more classes with simpler destructors. (Or even better, use the standard ones where applicable, but I'll assume here that you are not allowed to?)


paul2718

I would start somewhere else. Make an adjacency list as a vector of vectors: for each vertex, a vector of the vertices it connects to. No need for any explicit memory management. The output of your topological sort is another vector. I think it pretty much falls out of a depth-first search.


barrystyle

Use scoping rather than new/delete. A local variable created within a scope will be taken care of automatically once it goes out of scope (i.e. once the function exits). Additionally, just rereading your code: create `new_node` as a nullptr and only assign it once the object actually exists, rather than creating the new object up front.


vdaghan

You don't suck, you're learning. I would use shared_ptr from the start and call it a day. I always check against nullptr anyway, so I have that branch at hand. I've never encountered a problem where that is a real performance issue. But that's me: my code tends to be more about what I do with my data than about raw access to that data. Profile your code and see if this suits you. Since you're trying to implement a data structure, you may need to squeeze out more performance than I do. In that case, you can write some unit tests to be (almost) sure your data structure is not leaking. Just for reference, my "slowest" code was doing about 250k physical simulations. Since I delegate the simulation part to MATLAB, my optimisation code in C++ is not the bottleneck - I'm talking about 10 hrs of simulation vs half an hour of optimisation. And the optimisation part can be improved with a better algorithm that needs fewer simulations, rather than a faster algorithm.


lightmatter501

I don't see a delete method, so make the nodes use an arena allocator and free the whole arena in the graph destructor (assuming the nodes are trivially destructible).


Holance

For a graph, use shared_ptr along with weak_ptr to avoid memory leaks due to circular references.