Code Review

Jamal
First: The bottleneck of your code might not be where you think it is. I recommend reading Apple's Performance Guidelines as well as the specific File-System Performance Guidelines.

Second: Typically the bottleneck is disk access, so making the build-up of your dictionaries concurrent will not gain you anything: there is only one drive. You are checking each item with NSFileManager's fileExistsAtPath:, which might be the bottleneck. Try fetching this information up front when building myArray; you are probably already getting it from the directory enumerator. There are options to fetch specific metadata directly for a URL (for example, whether it is a directory), which is then cached in the NSURL, instead of working with path strings.
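To illustrate the prefetching point, here is a minimal, hypothetical sketch (the root URL and the per-item handling are placeholders): the enumerator is told up front which resource values are needed, so the later getResourceValue: call reads the NSURL's cache instead of hitting the disk again.

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Placeholder root; substitute the directory you actually scan.
        NSURL *rootURL = [NSURL fileURLWithPath:@"/tmp" isDirectory:YES];

        // Ask the enumerator to prefetch exactly the metadata we need.
        NSArray *keys = @[NSURLIsDirectoryKey, NSURLNameKey];
        NSDirectoryEnumerator *enumerator =
            [[NSFileManager defaultManager] enumeratorAtURL:rootURL
                                 includingPropertiesForKeys:keys
                                                    options:NSDirectoryEnumerationSkipsHiddenFiles
                                               errorHandler:nil];

        for (NSURL *itemURL in enumerator) {
            // This reads the value cached in the NSURL during enumeration;
            // no extra fileExistsAtPath:isDirectory: round trip to the disk.
            NSNumber *isDirectory = nil;
            [itemURL getResourceValue:&isDirectory
                               forKey:NSURLIsDirectoryKey
                                error:NULL];
            NSLog(@"%@ (directory: %@)", itemURL.lastPathComponent, isDirectory);
        }
    }
    return 0;
}
```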

Third: Did you set breakpoints to find out more about the EXC_BAD_ACCESS? Is it caused by objects being released too early, or by a collection being mutated while it is enumerated? Set a breakpoint on "All Exceptions" in Xcode and run the code under the debugger; you will then see more details on the crash.

Fourth: To isolate the crash, make the code in your block smaller. Remove all the code that is not really needed, like this whole LR thing.


Last: I've analyzed the provided code, which is not recursive. See my GitHub for the edits. Here are my findings (for a test directory with 27,861 nested items):

  • Most time was spent in the enumeration, fetching filesystem metadata:

         Time  Self  Symbol Name
     3568.0ms 53.6%  -[SDAppDelegate createDirectoryStructure]
     2680.0ms 40.2%  -[NSURLDirectoryEnumerator nextObject]

    The new code fetches only as much metadata as needed and reuses it via NSURL.

  • The code for filling the array also did lots of duplicate checks:

         Time  Self  Symbol Name
     2997.0ms 45.0%  -[SDAppDelegate createArraysForLocalDirectories]
     2109.0ms 31.6%  -[SDAppDelegate addDictionaryItem:withURL:isDir:]
     1121.0ms 16.8%  -[NSArray containsObject:]

    The new code does this a bit more simply; it could be even simpler, see the comment in the code.

  • Another issue was memory management. I removed the nested autoreleasepool; it is not really needed. See "Use Local Autorelease Pool Blocks to Reduce Peak Memory Footprint".

  • As for concurrency: you were repeatedly checking the mutable self.dict for whether a key exists. If you want to write to this dict from concurrent blocks, you have to synchronize access to it with a lock. The simplest one is @synchronized().

  • The improved code runs on the very same directory structure in the following times:

         Time  Self  Symbol Name
      735.0ms  6.6%  -[SDAppDelegate createDirectoryStructure]
      268.0ms  2.4%  -[SDAppDelegate createArraysForLocalDirectories]
  • You could make the code recursive by writing a method that uses NSDirectoryEnumerationSkipsSubdirectoryDescendants and then calls itself for each directory it encounters, but that probably would not speed things up.
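On the duplicate checks: -[NSArray containsObject:] is a linear scan, so repeated membership tests get expensive. A minimal sketch of one common fix, with made-up data, is to track seen items in an NSMutableSet (hashed, effectively constant-time lookup) alongside the ordered array:

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Made-up input with duplicates.
        NSArray *incoming = @[@"a.txt", @"b.txt", @"a.txt", @"c.txt"];

        NSMutableArray *ordered = [NSMutableArray array];  // keeps insertion order
        NSMutableSet *seen = [NSMutableSet set];           // fast membership test

        for (NSString *name in incoming) {
            if (![seen containsObject:name]) {  // hashed lookup, not a linear scan
                [seen addObject:name];
                [ordered addObject:name];
            }
        }
        NSLog(@"%@", ordered);  // ordered now holds a.txt, b.txt, c.txt
    }
    return 0;
}
```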
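For the concurrency point, here is a minimal sketch of guarding a shared mutable dictionary with @synchronized (the dictionary and keys are made up). Note that the existence check and the write sit inside the same critical section; otherwise two threads can both pass the check for the same key.

```objectivec
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        NSMutableDictionary *dict = [NSMutableDictionary dictionary];

        dispatch_queue_t queue =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        // 100 concurrent iterations all writing into the same dictionary.
        dispatch_apply(100, queue, ^(size_t i) {
            NSString *key = [NSString stringWithFormat:@"item-%zu", i % 10];
            @synchronized (dict) {
                // Check-then-write as one atomic unit.
                if (dict[key] == nil) {
                    dict[key] = @(i);
                }
            }
        });
        NSLog(@"%lu keys", (unsigned long)dict.count);  // 10 keys
    }
    return 0;
}
```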
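The recursive variant from the last bullet could be sketched like this (walkDirectory is a name I made up; the root path is a placeholder). Each level is a separate shallow enumeration, i.e. an extra round trip to the filesystem, which is why it likely will not be faster:

```objectivec
#import <Foundation/Foundation.h>

// Enumerate one directory level only, then recurse into subdirectories.
static void walkDirectory(NSURL *directoryURL, NSUInteger depth) {
    NSDirectoryEnumerator *enumerator = [[NSFileManager defaultManager]
                 enumeratorAtURL:directoryURL
      includingPropertiesForKeys:@[NSURLIsDirectoryKey]
                         options:NSDirectoryEnumerationSkipsSubdirectoryDescendants
                    errorHandler:nil];

    for (NSURL *itemURL in enumerator) {
        NSNumber *isDirectory = nil;
        [itemURL getResourceValue:&isDirectory
                           forKey:NSURLIsDirectoryKey
                            error:NULL];
        NSLog(@"%*s%@", (int)(depth * 2), "", itemURL.lastPathComponent);
        if (isDirectory.boolValue) {
            walkDirectory(itemURL, depth + 1);  // one extra disk hit per level
        }
    }
}

int main(void) {
    @autoreleasepool {
        // Placeholder root directory.
        walkDirectory([NSURL fileURLWithPath:@"/tmp" isDirectory:YES], 0);
    }
    return 0;
}
```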
