Amdahl's law in reverse: the wimpy core advantage

Once a chip’s single-core performance lags by more than a factor of two or so behind the higher end of current-generation commodity processors, making a business case for switching to the wimpy system becomes increasingly difficult.

– Google's Urs Hölzle, in "Brawny cores still beat wimpy cores, most of the time"

Google sure knows its own business, so they're probably right when they claim that they need high serial performance. However, different businesses are different, and there are advantages to "wimpy cores" beyond power savings which are omitted from the "brawny/wimpy" paper and which are worth mentioning.

It is a commonplace assumption that a single 3 GHz processor is always better than 4 processors at 750 MHz "because of Amdahl's law". Specifically:

  • Some of your tasks can be parallelized to run on 4 cores – these will take the same time to complete on the two systems.
  • Other tasks can only run on one core – these will take 4x more time to complete on the 750 MHz wimpy-core system.
  • Some tasks are in-between – say, a task using 2 cores will take 2x more time to complete on the 750 MHz system.

Overall, the 3 GHz system never completes later and sometimes completes much earlier.
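
The naive comparison in the bullets above fits in a few lines of Python (the clock speeds are from the text; the linear-scaling assumption is exactly what the rest of this post questions):

```python
# One 3 GHz core vs. four 750 MHz cores. `cycles` is the work in
# gigacycles; `par` is how many cores the task can use.

def time_brawny(cycles, par):
    # a single 3 GHz core: parallelism beyond one core is wasted
    return cycles / 3.0

def time_wimpy(cycles, par):
    # four 750 MHz cores
    return cycles / (0.75 * min(par, 4))

# time_brawny(3.0, 1) -> 1.0   time_wimpy(3.0, 1) -> 4.0  (4x slower)
# time_brawny(3.0, 2) -> 1.0   time_wimpy(3.0, 2) -> 2.0  (2x slower)
# time_brawny(3.0, 4) -> 1.0   time_wimpy(3.0, 4) -> 1.0  (a tie)
```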

However, this assumes that a 3 GHz processor is consistently 4x faster than a 750 MHz processor. This is true in those rare cherished moments when both processors are running at their peak throughput. It's not true if both are stuck waiting for a memory request they issued upon a cache miss. For years, memory latency has been lagging behind memory throughput, and the 4x faster processor does not get a 4x smaller memory latency.

Assuming the latency is the same on the two systems – which is often very close to the truth – and that half of the time of the 750 MHz system is spent waiting for memory when running a serial task, the 3 GHz CPU will give you a speed-up of less than 2x.

What if the task can run in parallel on 4 cores? Now the 750 MHz system gets a 4x speed-up and completes more than 2x faster than the 3 GHz system!

(This assumes that the 4 750 MHz cores are not slowed down due to them all accessing the same memory – which, again, is very close to the truth. Memory bandwidth is usually high enough to serve 4 streams of requests – it's latency which is the problem. So having several cores waiting in parallel is faster than having one core waiting serially.)
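
Putting numbers on this argument (a sketch under the stated assumptions: identical memory latency on both systems, and half of the wimpy system's serial time spent waiting):

```python
# Serial task on one 750 MHz core: total time normalized to 1.
wimpy_serial = 1.0
wait = 0.5 * wimpy_serial        # time stalled on memory
compute = wimpy_serial - wait    # time actually computing

# On the 3 GHz core the compute half shrinks 4x; the waiting doesn't.
brawny_serial = compute / 4 + wait        # 0.625
speedup = wimpy_serial / brawny_serial    # 1.6 - well under 4x

# If the task parallelizes across the 4 wimpy cores - which wait in
# parallel - the wimpy system finishes first:
wimpy_parallel = wimpy_serial / 4         # 0.25
# brawny_serial / wimpy_parallel -> 2.5: the wimpy system wins 2.5x
```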

A slow, parallel system outperforming a fast, serial system – Amdahl's law in reverse!

Is memory latency unique?

In a way it isn't – there are many cases where faster systems fail to get to their peak throughput because of some latency or other. For instance, the cost of mispredicted branches will generally be higher on faster systems.

However, most other latencies of these kinds are much smaller – and are handled rather well by the mechanisms making "brawny" cores larger and deserving of their name. For example, hardware branch predictors and speculative execution can go a long way in reducing the ultimate cost of mispredicted branches.

"Brawny" speculative hardware is useful enough to deal with memory latency as well, of course. Today's high-end cores all have speculative memory prefetching. When you read a contiguous array of memory, the core speculates that you'll keep reading, fetches data ahead of the currently processed elements, and when time comes to process the next elements, they're already right there in the cache.

The problem is that this breaks down once you try to use RAM as RAM – a random access memory where you don't go from index 0 to N but actually "jump around" randomly. (Theoretically this isn't different from branch prediction; in practice, large non-contiguous data structures that don't fit into caches and are "unpredictable" to prefetchers are much more common than unpredictable branches or an overflowing branch history table.)
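
A minimal sketch of the two access patterns (illustrative only: Python's interpreter overhead swamps cache effects, but in a low-level language the pointer chase below runs many times slower than the scan on typical hardware):

```python
import random

N = 1 << 16

# Build a "linked list" as an index array threaded through memory in a
# random cyclic order - the standard pointer-chasing setup.
order = list(range(N))
random.shuffle(order)
nxt = [0] * N
for a, b in zip(order, order[1:] + order[:1]):
    nxt[a] = b

# Sequential scan: address i+1 is trivially predictable from address i,
# so a hardware prefetcher hides the memory latency.
seq = sum(nxt)

# Pointer chase: the next address is the *value* just loaded, so nothing
# can be fetched ahead until the previous load completes.
i = order[0]
chase = 0
for _ in range(N):
    chase += i
    i = nxt[i]

assert i == order[0]   # we walked the full cycle back to the start
assert chase == seq    # same work done - only the access pattern differs
```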

Hence we have pleas like this computer architect's kind request to stop using linked lists:

Why would anyone use a linked-list instead of arrays? I argue that linked lists have become irrelevant in the context of a modern computer system:

1. They reduce the benefit of out-of-order execution.

2. They throw off hardware prefetching.

3. They reduce DRAM and TLB locality.

4. They cannot leverage SIMD.

5. They are harder to send to GPUs.

Note that problems 1, 2 and 4 "disappear" on wimpy cores – that is, such cores don't have out-of-order execution, hardware prefetchers or SIMD instructions. So a linked list doesn't result in as many missed performance opportunities as it does on brawny cores.

"Why would anyone use linked lists"? It's not just lists, for starters – it's arrays of pointers as well. "Why array of pointers" seems exceedingly obvious – you want to keep references instead of values to save space as well as to update something and actually get the thing updated, not its copy. Also you can have elements of different types – very common with OO code keeping arrays of base class pointers.

And then using arrays and indexing into them instead of using pointers still leaves you with much of the "access randomness" problem. A graph edge can be a pointer or an index; and while mallocing individual nodes might result in somewhat poorer locality than keeping them in one big array, you still bump into problems 1, 2 and 4 for big enough node arrays – because your accesses won't go from 0 to N.

Of course you could argue that memory indirection is "poor design" in the modern world and that programmers should mangle their designs until they fit modern hardware's capabilities. But then much of the "brawny vs wimpy" paper's argument is that brawny cores mean smaller development costs – that you get high performance for little effort. That's no longer true once the advice becomes to fight every instance of memory indirection.

It's also somewhat ironic from a historical perspective, because in the first place, we got to where we are because of not wanting to change anything in our software, and still wishing to get performance gains. The whole point of brawny speculative hardware is, "your old code is fine! Just give me your old binaries and I'll magically speed them up with my latest and greatest chip!"

The upshot is that you can't have it both ways. Either keep speculating (what, not brawny/brainy enough to guess where the next pointer is going to point to?), or admit that speculation has reached its limits and you can no longer deliver on the promises of high serial performance with low development costs.

Formalizing "reverse Amdahl's law"

…is not something I'd want to do.

Amdahl's law has a formal version where you take numbers representing your workload and you get a number quantifying the benefits a parallel system might give you.
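
For reference, that formal version fits in one line (p is the parallelizable fraction of the workload, s the speedup applied to it):

```python
# Amdahl's law: overall speedup when a fraction p of the work is sped up
# by a factor of s and the remaining (1 - p) is left alone.
def amdahl(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# amdahl(0.5, 4) -> 1.6: the memory-wait example above, if you treat the
# compute half as the "improvable" fraction and the wait as serial.
# amdahl(1.0, 4) -> 4.0: everything improvable, full 4x.
```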

A similar formalization could be done for "reverse Amdahl's law", taking into account, not just limits to parallelization due to serial bottlenecks (what Amdahl's law does), but also "limits to serialization"/the advantage to parallelization due to various kinds of latency which is better tolerated by parallel systems.

But I think the point here is that simple formal models fail to capture important factors – not that they should be made more complex with more input numbers to be pulled out of, erm, let's call it thin air. You just can't sensibly model real programs and real processors with a reasonably small bunch of numbers.

Speaking of numbers: is time spent waiting actually that large?

I don't know; depends on the system. Are there theoretically awesome CPUs stuck waiting for various things? Absolutely; here's that article about 6% CPU utilization in data servers (it also says that this is viewed as normal by some – as long as RAM and networking resources are utilized. Of course, by itself the 6% figure doesn't mean that brawny cores aren't better – if those 6% are all short, serial sprints letting you complete requests quickly, then maybe you need brawny cores. The question is how particular tasks end up being executed and why.)

Could a "wimpy-core" system improve things for you? It depends. I'm just pointing out why it could.

From big.LITTLE to LITTLE.big, or something

There's a trend of putting a single wimpy core alongside several brawny ones, the wimpy core being responsible for tasks where energy consumption is important, and the brawny ones being used when speed is required.

An interesting alternative is to put one brawny core and many wimpy ones – the brawny core could run the serial parts and the wimpy cores could run the parallel parts. If the task is serial, then a brawny core can only be better – perhaps a lot better, perhaps a little better, but better. If the task is parallel, many wimpy cores might be better than few brawny ones.

(I think both use cases fall under ARM's creatively capitalized "big.LITTLE concept"; if the latter case doesn't, perhaps LITTLE.big could be a catchy name for this – or we could try other capitalization options.)

Hyperthreading/Barrel threading/SIMT

…is also something that helps hide latency using coarse-grain, thread-level parallelism; it doesn't have to be "whole cores".

Parallelizing imperative programs

…is/can be much easier and safer than is commonly believed. But that's a subject for a separate series of posts.

Summary

  • Faster processors are exactly as slow as slower processors when they're stalled – say, because of waiting for memory;
  • Many slow processors are actually faster than few fast ones when stalled, because they're waiting in parallel;
  • All this on top of area & power savings of many wimpy cores compared to few brawny ones.

Comments

#1 Stephen on 02.22.13 at 6:39 am

You didn't explicitly say it but GPUs are prime examples of when wimpy wins.

#2 Aaron on 02.22.13 at 9:18 am

@Stephen – or, GPUs are a prime example where becoming *less* wimpy wins. Shader instruction sets have been progressing toward less wimpiness, greater generality.

#3 Alex on 02.22.13 at 9:26 am

@Aaron
Less wimpy than previous GPU cores, certainly, but still far more so than modern x86 chips. I guess there's an optimal amount of wimp/brawn for a given task, and currently GPUs are on the wimp side…

#4 Jeremiah on 02.22.13 at 11:04 am

Sun tried this strategy with Niagara-based UltraSPARC CPUs I believe, arguing that having 64 logical threads at 1 GHz resulted in more consistent latency, so that 5% of your web requests don't get insanely high response latency on a massively oversubscribed system. This strategy didn't get a whole lot of industry uptake, although I can't say if it was due to the strategy itself or due to aversion to SPARC / Solaris / etc.

#5 Tom on 02.22.13 at 11:18 am

This argument is predicated on the notion that the workload can be naturally parallelized. This may not be true: for example, consider that most financial quote data is represented as a stream of ordered events which alter the state of the "world". Also, workloads which _can_ be parallelized must be done in a way that does not introduce false sharing, which is only as easy as the problem domain and implementation language allow. Your assertion that parallelism is easier to achieve than most people believe sounds… well… tough to swallow. Outside of embarrassingly parallel workloads, adding parallelism adds another dimension to analyze. If people are already skipping out on memory-hierarchy optimizations, how can we expect that they'll suddenly pick up the ball on parallelism optimizations?

Regarding hardware prefetching: If an object is larger than a single cache-line, then brawny prefetching can still be useful for latency-hiding. If a series of objects was allocated in sequence (or nearly so), then it is likely that they will appear contiguously in memory, and again brawny prefetching can help. Memory arenas and pooled allocations can help in this regard, by keeping your virtual address space more compact than your pointer-chains would suggest.

Regarding branch misprediction: Intel learned their lesson from the P4: deep pipelines make branch misprediction expensive. The newest Intel hardware has a much lower cost to misprediction. Furthermore, brawny hardware with conditional instructions can eliminate the misprediction entirely. I don't see an advantage to wimpy hardware here, though I do see a benefit for otherwise-wimpy hardware that is just brawny enough to provide a set of conditional instructions.

Regarding SIMD absence from wimpy cores: Effective use of SIMD should be providing a speedup linear in the size of the SIMD "word". Replacing a 16-way 8-bit integer calculation with 16 separate threads all doing 8-bit integer calculations sounds like a pretty raw deal. If would only make sense if the computation wasn't suitable (or was borderline suitable) for SIMD in the first place.

#6 Wiktor on 02.22.13 at 11:51 am

Sun Niagara processors had extremely slow cores when doing single-threaded tasks (4-8x slower than an x86 processor clocked at ~2 GHz). That might be the reason why it didn't catch on in industry, where from time to time you still need decent single-threaded performance.

#7 Giga on 02.22.13 at 12:45 pm

There's a name for "reverse Amdahl's law" – it is Little's law:
http://en.wikipedia.org/wiki/Little's_law

needed parallelism = latency * throughput

This is nicely explained in the context of GPU processing here:
http://www.cs.berkeley.edu/~volkov/volkov10-GTC.pdf

#8 Bilkokuya on 02.22.13 at 3:06 pm

@Alex
I think that definitely brings up the additional point on wimpy performance gains where the number of independent tasks greatly exceeds the complexity of each task.

#9 Yossi Kreinin on 02.22.13 at 8:44 pm

"Your assertion that parallelism is easier to achieve than most people believe sounds… well… tough to swallow."

Just wait a bit for the upcoming posts :) (As to embarrassingly parallel… depends how easily one is embarrassed :)

#10 Yossi Kreinin on 02.22.13 at 9:07 pm

@Giga: thanks for the link! Interesting stuff from there: on one hand, ILP is sometimes a better way to hide latency on GPUs than TLP; on the other hand, ILP is limited (by 4 in their experiments on GPUs), the reason presumably being that keeping multiple independent instructions in flight uses up rather limited hardware scheduling resources. (Of course TLP is limited as well – by the number of available registers/register sets; I think it's more easily scalable though – perhaps that's the reason for the "lies" by CUDA manuals that they cite – that is, NVIDIA is OK with committing to high occupancy being beneficial on their machines but not to ILP, an example of that being the reduction of #registers/thread in Fermi that they call a "regression").

#11 Wormhole on 02.23.13 at 4:51 am

This is Amdahl's law applied to processor speed.

From Wikipedia:
"Amdahl's law [...] is used to find the maximum expected improvement to an overall system when only part of the system is improved."

In this case, the brawnier core has increased processing speed, but the same memory latency, so the speedup for your specific task is 1/(0.5 + 0.5/4) = 1.6

Bringing in multi-core just confuses the point, IMO. You now know that the bigger core is 1.6 times faster for your workload, so if you presuppose that the tasks are parallelisable, then obviously 4 > 1.6. The reason the big core is 1.6 times faster does not matter. The small cores are not just "waiting in parallel", they are doing everything in parallel.

(Then again, if the loads are 4-way parallelisable, why did the big core have to do them in sequence? The programmer could at least have thrown in some preloads!)

#13 nit on 02.23.13 at 1:50 pm

"embarrassingly parallel" has a distinct technical meaning, it's when you get a slight superlinear speedup, typically because of caches. (if you get a bigger speed up than, say 10%, your serial version is probably screwed up.)

#14 GD on 02.24.13 at 11:54 pm

Maybe another counterpoint is that the 'brawnier' chips also have much brawnier level 2/3 caches, so in the end it is a rather rare/hypothetical case where you can get an advantage using many wimpier chips. Also, when you have 4 independent cores accessing memory in parallel, you do tend to have more randomness than with one core, especially in the case where the original access pattern was not that random. Now, naturally, when your performance is I/O-bound, such as in servers, you need to have some task parallelism in the software layer (having many independent threads doing asynchronous I/O), but that may be implemented quite efficiently by a single hardware core (up to a point, obviously). Also, the failure of traditional hyper-threading (which may be an example of trying to wait in parallel in order to hide latency) to get a marked speedup in real life also suggests there is limited utility in this direction.

#15 Yossi Kreinin on 02.25.13 at 1:48 am

The failure of HT may point to software not being written to exploit HT; normally hyperthreads are used to run two separate processes with zero data sharing, and there's not much multi-threaded, single-process software out there.

As to L2/L3 – you pay quite a penalty for missing L1 and hitting these instead; better than accessing DRAM of course but still quite a penalty, so the argument still applies to an extent.

You and I have a lot to discuss about getting speedups from HT in real life :)

#16 Tom on 03.01.13 at 9:39 pm

Even if software is written to exploit HT, there are still some limitations. Intel's software optimization manuals go into this in quite some detail, but they are extremely dense material. Some of the basic limitations:
- Newer hardware uses a "micro-op cache" which caches pre-decoded instructions. If HT is disabled, every CPU gets double the uop cache. A HT pair of CPUs will not compete for this cache; instead, each gets half of the cache available.
- HT CPUs still share instruction decoding, which is cited frequently in the manual as a common bottleneck. So much software will see additional latency due to instruction decoding. It's not simply a matter of optimizing the software to run on HT hardware: instruction decoding can be a bottleneck even without the alternating HT dispatch.

Both of these limitations come with the fact that HT is designed to double-up on some pieces of hardware (execution units are easy to parallelize, as are registers) and share other pieces of hardware (instruction decode, caches). Right off the bat, this means that efficient HT use is necessarily specialized. For high-performance software, it will remain a niche because IMHO, there is very little benefit to try to optimize software for such a specific arrangement (where shared L1/L2 caches fit into the problem domain).

#17 Yossi Kreinin on 03.02.13 at 1:25 am

Let's put it this way: HT is better for worker threads sharing code and some of the data than for unrelated tasks.

Anyway, I'm not recommending HT for any particular case, just pointing out one nice thing about it.

#18 Assaf Lavie on 03.05.13 at 12:42 pm

Fantastic post, Yossi. Thank you.

#19 Yossi Kreinin on 03.05.13 at 11:24 pm

Glad you like it; I should add that there's a lot of weight in all the counter-arguments people brought up in the comments…

#20 KJS3 on 12.17.13 at 11:42 am

@Jeremiah: Sun tried this strategy with Niagara-based UltraSPARC CPUs I believe, arguing that having 64 logical threads at 1 GHz resulted in more consistent latency, so that 5% of your web requests don't get insanely high response latency on a massively oversubscribed system.

This is true, but it works for more than one type of workload. For example, last I looked at the benchmarks, a t2000 was an order of magnitude faster than a contemporary dual Opteron server on MySQL workloads once a sufficient number of threads were engaged. So, clearly, if you have workloads that match the architecture, you get big wins (the fact that people ran minimally parallel benchmarks and declared Niagara a dud notwithstanding).

@Jeremiah: This strategy didn't get a whole lot of industry uptake, although I can't say if it was due to the strategy itself or due to aversion to SPARC / Solaris / etc.

I'm not sure if I agree with that. Niagara is on its 5th major architecture refresh. Certainly AMD Bulldozer/Piledriver is a wimpy-core architecture (though its success is yet to be proven). More esoteric things such as Parallella are wimpy-core, and as noted GPUs arguably are. So I think there's significant industry uptake.

#21 Jouni Osmala on 09.12.14 at 5:29 pm

Amdahl's law means more than just threaded vs non-threaded work.
It's: total_time = execution_time_affected / improvement + time_unaffected.

And the first example I saw of it was a trick question about how much you'd need to improve the multiplier, if it takes 80% of execution time and you want to run the program five times faster.

Now Amdahl's law applied to process: transistor delay has been exponentially reduced while wire delay has been kept constant -> at some point transistor delay no longer matters, but at first the total reduction of cycle time is exponential, as long as transistor delay dominates.

Then same thing about adding another ALU that gets rarely used and so on. Amdahl's law totally killed single threaded improvement almost decade ago, and we are going for 10% yearly improvements nowadays.
