How to Obtain IP Ranges

November 16, 2018, by Amon

[Unlimited-IP Solution]

Purchase: http://www.ctohome.com/FuWuQi/c9/553.html

What can you do if you need this many IPs? What if your server/VPS needs a lot of IP addresses but the datacenter won't approve them? How do you buy IPs in different C-class (/24) blocks?

How the "unlimited-IP solution" works, and the steps to implement it:

1. We buy a new server in the same datacenter (or another one) and request the maximum number of IPs permitted for it.
2. Your website stays on your own server, with the domain resolving to one of your server's IPs so it remains reachable as usual.
3. You notify us to set up the new IPs (on the newly purchased server we configure the IPs for your website and synchronize the data, similar to CDN technology).
4. You change the domain's DNS to resolve to the new IP.
5. Done.

Benefits:

1. No matter how many IPs your original VPS or server allows at most, with the method above we can extend your server to an unlimited number of IPs, all available for your use.
2. The number of IPs can keep growing, frequently across different C-class blocks (not guaranteed); IPs from different datacenters may even differ in the B and A octets, which is very favorable for SEO.
3. Since visitors access the new IPs, the load and bandwidth usage of your original server are reduced.

US IPs: ¥30 per IP per month. In theory there is no limit on how many you can buy, but we suggest at most 50 IPs per VPS and at most 200 IPs per dedicated server.

European IPs: ¥100 per IP per month. In theory there is no limit on how many you can buy, but we suggest at most 10 IPs per VPS and at most 50 IPs per dedicated server.

How to Print the Command History with the history Command

November 14, 2018, by Amon

Reference: https://www.jb51.net/LINUXjishu/68187.html
Reference: https://blog.csdn.net/a806267365/article/details/40581159

When you type and run commands in a Linux shell (console), the shell automatically records each command in a history list, normally saved to the .bash_history file in the user's home directory. By default 1000 entries are kept, and this value can be changed.
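Bash controls that limit with two standard variables; a minimal sketch for raising it (the values here are just examples):

export HISTSIZE=5000        # entries kept in the current shell's history list
export HISTFILESIZE=5000    # entries kept in ~/.bash_history on disk

Put these lines in ~/.bashrc to make them permanent.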

The history command is mainly used to display the recorded command history and to re-run commands from that history.

List the most recent n commands:

history [n]

Erase all history entries from the current shell:

history -c

-a: append the history entries added in this session to histfiles; if histfiles is omitted, ~/.bash_history is used by default
-r: read the contents of histfiles into the current shell's history
-w: write the current history into histfiles

history [-raw] histfiles

history lists every command bash has saved and numbers them; you can run a specific history entry with an exclamation mark followed by a designator:

[!number]  [!command] [!!]

Parameters:
number: the history number of the command to run;
command: the first few letters of a command (runs the most recent match);
!: the previous command.

Run command number 99 from the history list:

!99

Repeat the previous command:

!!

Run the most recent command beginning with rpm (!string: the string can be anything; the shell searches backward from the latest history entry and runs the first command that matches):

!rpm

List the whole history one screen at a time:

history | more

How to Keep WordPress MU + BuddyPress Performing Well

November 14, 2018, by Amon

"How to scale WordPress to half a million blogs and 8,000,000 page views a month"

Original article: https://premium.wpmudev.org/blog/scaling-wordpress-wpmu-buddypress-like-edublogs/

We figured it was about time we shared some of the lessons we’ve learned scaling Edublogs to nearly half a million blogs and a place in the Quantcast top 5000 sites! So if you have grand plans for your site (or want to improve your existing setup / performance) read on and feel free to ask any questions :)

Scaling a large WordPress installation follows the same basic principles as scaling any large site.

The key is to truly understand your application, its architecture and the potential areas of contention. For WordPress specifically, the two key points of contention are page-generation time and the time spent in the database.

Database Layer:

Given the flexibility of WordPress, the database is the storage point not only for the "larger" items, such as users, posts and comments, but also for many little options and details. By its nature, WordPress may make many round-trip calls to the database to load these options, each requiring database and network resources.

The first level of "defense" against overloading the database is the MySQL Query Cache.

The Query Cache is a nifty little feature in MySQL: it stores, in a dedicated area within main memory, the results of any query against a table which has not recently changed.

That is, assuming a request comes in to retrieve a specific row, the table has not recently been modified in any way, and the cache has not filled up (requiring purging/cleaning), the query can be answered from this cache. The major benefit, of course, is that to satisfy the request the database does not need to go to the disk (generally the slowest part of the system); it can answer immediately from memory.
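A quick way to check whether the query cache is enabled and how well it is doing, using the standard mysql client (the variables and counters below exist in MySQL 5.x; the 64M size is illustrative):

mysql -e "SHOW VARIABLES LIKE 'query_cache%';"   # query_cache_type, query_cache_size, ...
mysql -e "SHOW STATUS LIKE 'Qcache%';"           # compare Qcache_hits with Qcache_inserts
# to enable it, set query_cache_type = 1 and query_cache_size = 64M in my.cnf and restart

A high ratio of Qcache_hits to Qcache_inserts means the cache is paying off.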

Memory

The other major boost for the database is to keep the working set in memory. The working set is loosely defined as the data that will be aggressively referenced over a period of time. Your database can hold 500GB of data, but the working set, the data actually needed now (and in the near future), may be only 5GB.

If you can keep that 5GB in memory (using generous key caches and system I/O buffers for MyISAM, or a large Buffer Pool for InnoDB), you will of course reduce the round trips to the disk. If the contention in the database is write-related, consider changing the storage engine for the WordPress tables to InnoDB. Depending on the number of tables, this can lead to memory starvation, so approach with caution.
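A sketch of inspecting the two relevant sizes with the mysql client (standard variable names; the right values depend on your RAM and working set, not on your total data size):

mysql -e "SHOW VARIABLES LIKE 'key_buffer_size';"          # MyISAM key cache
mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"  # InnoDB buffer pool
# both are raised in my.cnf, e.g. key_buffer_size = 1G or innodb_buffer_pool_size = 4G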

Disks

The last point on databases is disks. In the event the working set doesn't fit in memory (which is usually the case), make the disk sub-system as quick as possible. Trade in those "ultra-fast" 3.0Gb/s SATA disks for high-speed SCSI disks. Consider a striped array (RAID-0), but for safety's sake make it RAID-10. Spread the workload over multiple disks: for 150GB of disk space, consider several 50GB disks so that a large throughput can be obtained. If you will be doing heavy writes to this disk sub-system, add a battery-backed write-back cache; the throughput will be a lot higher.

The really nice "defense mechanism" for the database is to avoid the database altogether. As mentioned earlier, WordPress tends to make many database calls per page. If these calls can be drastically reduced or eliminated, database time goes down and page-generation speed goes up. This is usually done with memcached.

There are two types of cache: the object cache (loosely defined as things like options, settings, counts, etc.) and the full-page cache. A full-page cache stores a fully generated page (HTML output and all). This type of cache virtually eliminates page-generation time altogether.
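A minimal sketch of standing up memcached for the object cache (port and size are illustrative; on the WordPress side you still need a memcached object-cache drop-in):

memcached -d -m 256 -p 11211 -u nobody    # daemonize with a 256MB cache
echo stats | nc 127.0.0.1 11211           # verify it answers; watch get_hits / get_misses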

We should not forget MySQL slave replication. If a single database server cannot keep up, consider MySQL replication with a plugin like MultiDB or HyperDB to split the reads and the writes. Keep in mind that you will always have to write to a single database, but you should be able to read from many.
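A sketch of the replication side using the classic MySQL commands (the hostname, credentials and binlog coordinates are hypothetical; the read/write split itself is the plugin's job):

mysql -e "CHANGE MASTER TO MASTER_HOST='db-master.example.com', MASTER_USER='repl', MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=4;"
mysql -e "START SLAVE;"
mysql -e "SHOW SLAVE STATUS\G"            # Slave_IO_Running and Slave_SQL_Running should be Yes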

Page-Generation Time

WordPress spends a considerable amount of time compiling and generating the HTML page ultimately served to the client. For many, the typical choice is a server like Apache, which with its benefits also brings some limitations. By default, in Apache the PHP interpreter is built into the processes serving all pages on the site, whether they are PHP or not.

By using an alternate web server (e.g. nginx, lighttpd, etc.) you essentially “box-in” all PHP requests — and send them directly to a PHP worker pool which can work on the page-generation part of the request. This leaves the web server free to continue serving static files — or anything else it needs to. Unlike Apache, the PHP worker pool does not even need to reside on the same physical server as the web server. The most widely used implementation is using PHP as a FastCGI process (with the php-fpm patches applied).
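A minimal nginx excerpt of that "boxing-in" (a stock FastCGI location block; the worker address is an assumption and could just as well point at another machine):

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;    # the PHP worker pool, local or remote
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}

Everything that does not match stays with nginx itself, which keeps serving static files.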

File Storage

When using multiple web-tier servers to compile and generate WordPress pages, one of the issues encountered is uploaded multi-media. In a single-server install, the files are placed into the wp-content/blogs.dir folder and forgotten about. With more than one server, we must be careful to no longer store these files locally, as they would not be accessible from the other servers.

To work around this issue, consider a dedicated or semi-dedicated file server running a distributed file-system (NFS, AFS, etc.). When a user uploads a file, write it to the shared storage, which makes it accessible to all connected web servers. Alternatively, you may opt to upload it to Amazon S3, Rackspace CloudFiles or some other Content Delivery Network. Either way, the key is to make sure the files are not local to a single web server; if they are, they will not be known to the other servers.
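A sketch of the shared-uploads mount on each web server (the hostname and paths are hypothetical):

mount -t nfs fileserver:/export/blogs.dir /var/www/wp-content/blogs.dir
# or permanently, in /etc/fstab:
# fileserver:/export/blogs.dir  /var/www/wp-content/blogs.dir  nfs  defaults  0 0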

Refrain from (or better, never consider) serving files directly off this distributed file-system. Place a web server or some other caching service (varnish, squid) in front, responsible for reading the data off the shared storage device and returning it to the web server for sending back to the client. One advantage of something like varnish is that you can create a fairly large and efficient cache in front of the shared file system. This lets the file system focus on serving new files, leaving the highly requested files to the cache.

Semi-static requests

For requests which can be viewed as semi-static, treat them so. RSS feeds, for example, are technically updated and available immediately after a post or comment is published, but consider caching them for a period of time (5 minutes or so) in a caching proxy such as varnish or squid. This way a high number of requests for things like RSS feeds can be satisfied almost for "free": they are generated once and then served from the cache hundreds or thousands of times.
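A sketch of forcing that five-minute TTL in varnish (VCL 4.x syntax; the URL pattern is an assumption):

sub vcl_backend_response {
    if (bereq.url ~ "/feed") {
        set beresp.ttl = 5m;            # serve the cached feed for five minutes
        unset beresp.http.Set-Cookie;   # cookies would make the response uncacheable
    }
}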

What we use at Edublogs:

3x web-tier servers
2x database servers
1x file server

Each web-tier server runs nginx, a php-fcgi pool and a memcached instance. The Edublogs.org name resolves to three IP addresses, each fronted by one of the nginx servers. nginx is configured to distribute the PHP requests to one of the three servers (itself or the other two in the pool).
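A sketch of that distribution as an nginx upstream block (the addresses are hypothetical):

upstream php_pool {
    server 10.0.0.1:9000;   # this machine's own php-fcgi pool
    server 10.0.0.2:9000;   # the other two web-tier servers
    server 10.0.0.3:9000;
}

The PHP location block then uses fastcgi_pass php_pool; instead of a single address.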

The database servers function as a split setup. The heavier traffic (e.g. blog content) is stored on one set of servers and the global data on a separate set. "Global" data can be thought of as options, settings, etc.

The file server is fronted by a varnish pool and connected via NFS to all three web servers. Each web server has a local copy of the PHP files which comprise the site (no reading off of NFS). When a user uploads a multi-media file, it is copied over to the NFS mounts. On subsequent requests the data is served back by varnish (which also caches it for future requests).

Global Tables, InnoDB & Memcache

The global tables are InnoDB, as there are not that many of them, and thus get better performance. One of the primary reasons the individual blog tables are not InnoDB is InnoDB's data dictionary issues: with a large number of tables the dictionary can become too large and exhaust all memory on the system. Though there are patches available to change this behavior, the individual tables are mostly read-only anyway, which MyISAM handles quite well.
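Converting a global table is a one-liner (the database and table names are illustrative; the table is locked while it is rebuilt, so do this in a quiet period):

mysql wordpress -e "ALTER TABLE wp_usermeta ENGINE=InnoDB;"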

As for caching: we use the memcached-backed object cache and, on top of that, Batcache (which itself utilizes the memcached-backed object cache).

We hope that helps… and special shout out to our SysAdmin Michael who pretty much wrote this guide :)

WordPress Plugin: Link Directory

November 14, 2018, by Amon

Name: ULC
Full name: Useful Link Collections
Description: lets you create collections of useful links or favorite bookmarks and share the link list with visitors.
Author: https://codecanyon.net/user/mzworks/

How to Build a Site Search Engine

November 9, 2018, by Amon

Background: a forum held data on the scale of tens of millions of records, growing in real time; MySQL's built-in full-text search took over 40s per query, so the existing mechanism could no longer meet the site's search needs.

Solution: crawl the site's data into a database and build an index on it. A multi-process crawler was built with PHP + Python, with Redis coordinating distributed, concurrent crawling across multiple servers; once the data was stored, Sphinx built the full-text index, and the site went live on the Bootstrap framework + PHP. Search time dropped below 0.01s.
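The Sphinx side boils down to building the index and starting the search daemon; a sketch with the standard Sphinx/Coreseek commands (the config path is an assumption):

indexer --config /usr/local/coreseek/etc/sphinx.conf --all             # build all indexes defined in the config
searchd --config /usr/local/coreseek/etc/sphinx.conf                   # start the search daemon
indexer --config /usr/local/coreseek/etc/sphinx.conf --all --rotate    # rebuild later without stopping searchd

The PHP front end then queries searchd through the SphinxClient API instead of MySQL full-text search.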

Front end: jQuery/Bootstrap
Back end: PHP/Python/Apache/Memcache
Full-text search: Sphinx/Coreseek
Version control: Git/SVN
Data visualization: Gephi/SPSS