The main performance bottleneck of your solution is the representation of big integers as a string. In each step of the factorial computation:
- the intermediate result is converted from a string to an integer array,
- the current factor is converted from Int to a string to an integer array,
- a "long multiplication" is done on the two integer arrays, and finally,
- the resulting integer array is converted back to a string.
Also, in every step both input strings are validated to contain only decimal digits (plus an optional minus sign). This is not very efficient, and I'll address this point later.
Instead of validating each possible combination of the input strings x and y, it is simpler to validate both strings separately, using a common utility function:
func multiplyLong(_ x: String, _ y: String) -> String {
    func isValidNumber(_ s: String) -> Bool {
        guard let first = s.first else {
            return false // String is empty
        }
        let rest = first == "-" ? s.dropFirst() : s[...]
        return !rest.isEmpty && !rest.contains { !"0123456789".contains(0ドル) }
    }
    precondition(isValidNumber(x) && isValidNumber(y), "Valid numbers are required for multiplication")

    // ...
}
Note also how "0123456789" is used as a collection instead of ["0","1","2","3","4","5","6","7","8","9"], and that a conditional expression <someCondition> ? false : true can always be simplified to !<someCondition>.
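For illustration, the validation helper can be extracted and exercised on its own (a standalone sketch, not part of the code above):

```swift
func isValidNumber(_ s: String) -> Bool {
    guard let first = s.first else {
        return false // empty string
    }
    let rest = first == "-" ? s.dropFirst() : s[...]
    return !rest.isEmpty && !rest.contains { !"0123456789".contains(0ドル) }
}

print(isValidNumber("123"))   // true
print(isValidNumber("-123"))  // true
print(isValidNumber("-"))     // false  (a lone minus sign is rejected)
print(isValidNumber("12a3"))  // false
```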
The multiplication algorithm can also be improved. Instead of computing separate results for the product of one big integer with a single digit of the other big integer (your lines array), and "adding" those intermediate results later, it is more efficient to accumulate all products into a single array of result digits directly.
You also insert elements at the start of each line array repeatedly, which requires moving all existing elements. This could be avoided by reversing the order of elements in line.
Finally, note that the expression (v - (v % 10)) / 10, which occurs in two places, can be simplified to v / 10.
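A quick sanity check of that simplification (the value 47 is chosen arbitrarily):

```swift
let v = 47
// Integer division already truncates, so subtracting the remainder first is redundant.
print((v - (v % 10)) / 10)  // 4
print(v / 10)               // 4
```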
A better representation
All the conversion from strings to integer arrays and back to strings can be avoided if we represent a "big integer" not as a string, but as an integer array directly. Let's start by defining a suitable type:
struct BigInt {
    typealias Digit = UInt8
    let digits: [Digit]
    let negative: Bool
}
digits holds the decimal digits of the big integer, starting with the least-significant one. UInt8 is large enough to hold decimal digits (and all intermediate results during the long multiplication). For reasons that become apparent later, we use a type alias instead of hard-coding the UInt8 type.
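For example, using the memberwise initializer that Swift synthesizes for this struct, the number -305 is stored with its least-significant digit first (a sketch to illustrate the layout):

```swift
struct BigInt {
    typealias Digit = UInt8
    let digits: [Digit]
    let negative: Bool
}

// -305: digits are stored least-significant first.
let n = BigInt(digits: [5, 0, 3], negative: true)
print(n.digits)    // [5, 0, 3]
print(n.negative)  // true
```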
Now we define some initializers. The first one takes an array of digits, which is truncated for a compact representation. In Swift 4.2 one can use lastIndex(where:) instead; compare "In Swift Array, is there a function that returns the last index based in where clause?" on Stack Overflow. The second initializer takes an Int and is based on the first one. One could also add an init(string: String) initializer, but we don't need that here.
We also add a digitSum computed property for obvious reasons.
So now we have:
struct BigInt {
    typealias Digit = UInt8
    let digits: [Digit]
    let negative: Bool

    init(digits: [Digit], negative: Bool = false) {
        if let idx = digits.reversed().index(where: { 0ドル != 0 }) {
            self.digits = Array(digits[..<idx.base])
        } else {
            self.digits = []
        }
        self.negative = negative
    }

    init(_ value: Int) {
        var digits: [Digit] = []
        var n = value.magnitude
        while n > 0 {
            digits.append(Digit(n % 10))
            n /= 10
        }
        self.init(digits: digits, negative: value < 0)
    }

    var digitSum: Int {
        return digits.reduce(0) { 0ドル + Int(1ドル) }
    }
}
In order to print a big integer, we adopt the CustomStringConvertible
protocol:
extension BigInt: CustomStringConvertible {
    var description: String {
        if digits.isEmpty {
            return "0"
        }
        return (negative ? "-" : "") + digits.reversed().map { String(0ドル) }.joined()
    }
}
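A short usage sketch (repeating the minimal definitions so the snippet is self-contained):

```swift
struct BigInt {
    typealias Digit = UInt8
    let digits: [Digit]
    let negative: Bool
}

extension BigInt: CustomStringConvertible {
    var description: String {
        if digits.isEmpty {
            return "0"
        }
        // Reverse back to most-significant-first order for display.
        return (negative ? "-" : "") + digits.reversed().map { String(0ドル) }.joined()
    }
}

print(BigInt(digits: [5, 0, 3], negative: true))  // -305
print(BigInt(digits: [], negative: false))        // 0
```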
Now we can implement the multiplication. This is essentially your method of long multiplication, but simplified to accumulate all digits into a single result buffer (which is allocated only once):
extension BigInt {
    func multiplied(with other: BigInt) -> BigInt {
        var result = Array(repeating: Digit(0), count: digits.count + other.digits.count)
        for (i, a) in digits.enumerated() {
            var carry: Digit = 0
            for (j, b) in other.digits.enumerated() {
                let r = result[i + j] + a * b + carry
                result[i + j] = r % 10
                carry = r / 10
            }
            while carry > 0 {
                let r = result[i + other.digits.count] + carry
                result[i + other.digits.count] = r % 10
                carry = r / 10
            }
        }
        return BigInt(digits: result, negative: self.negative != other.negative)
    }

    static func *(lhs: BigInt, rhs: BigInt) -> BigInt {
        return lhs.multiplied(with: rhs)
    }
}
Note how init(digits:negative:) is used to compact the result array, i.e. to remove unnecessary zero digits.
As a bonus, we implement the * operator so that big integers can be multiplied more naturally.
With all that, the digit sum of 100!
is computed as
let sum = (1...100).reduce(BigInt(1), { 0ドル * BigInt(1ドル) }).digitSum
print(sum) // 648
In my tests on a 1.2 GHz Intel Core m5 MacBook, with the code compiled in the Release configuration (i.e. with optimization on), the new code runs in approximately 0.5 milliseconds, compared to 30 milliseconds with your original code.
Further suggestions
- Currently, the digits array contains the base-10 digits of the big integer. If we choose a larger Digit type and store more decimal digits in each array element, then fewer multiplications are required, making the multiplication more efficient. With UInt64 as the Digit type, one can perform multiplication of 9-digit decimal numbers without overflow. This requires changing the Digit type alias, and modifying all places in the code where base-10 arithmetic is done.
- Add an init?(string: String) initializer so that a big integer can be created with let bn = BigInt(string: "713847891471894789471894789478917417439274").
- Adopt the ExpressibleByIntegerLiteral protocol, so that a big integer can be created as let bn: BigInt = 123.
- Add addition and subtraction to the BigInt type. If you feel courageous, add division and remainder calculation.
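As a rough sketch of what that change involves (the name BigInt9 and the base constant are illustrative, not part of the answer above), the Int conversion would split the value into base-10^9 chunks instead of single decimal digits:

```swift
struct BigInt9 {
    typealias Digit = UInt64
    static let base: Digit = 1_000_000_000  // 10^9, i.e. 9 decimal digits per element

    let digits: [Digit]  // base-10^9 "digits", least-significant first

    init(_ value: Int) {
        var digits: [Digit] = []
        var n = Digit(value.magnitude)
        while n > 0 {
            digits.append(n % BigInt9.base)
            n /= BigInt9.base
        }
        self.digits = digits
    }
}

let x = BigInt9(12_345_678_901)
print(x.digits)  // [345678901, 12]
```

Since (10^9 - 1)^2 is below 2^63, the product of two such "digits" still fits in a UInt64, which is what makes the 9-digit chunk size safe.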