I have a multilingual site (the culture depends on the host URL, e.g. uk.site.com, fr.site.com, etc.). I'm caching a partial in the frontend and setting its cache key. Then from the model I do something like:
$cacheManager = sfContext::getInstance()->getViewCacheManager();
$cacheUri = $cacheManager->getPartialUri($module, $action, $cacheKey);
$cacheManager->remove($cacheUri, '*'); // second param clears this cache for all hosts
// $cacheManager->remove($cacheUri); works fine like that, but only for the domain it's been called from.
The code above calls the symfony core method sfMemcacheCache->removePattern($pattern), which contains:
$regexp = self::patternToRegexp($this->getOption('prefix').$pattern);
foreach ($this->getCacheInfo() as $key)
{
  if (preg_match($regexp, $key))
  {
    $this->remove(substr($key, strlen($this->getOption('prefix'))));
  }
}
and $this->getCacheInfo() is always empty, so nothing gets cleared.
It always throws a 'To use the "removePattern" method, you must set the "storeCacheInfo" option to "true".' exception. I can't find where that cache info is supposed to be filled, or what exactly its role is.
A simplified version of my question is: "Why is $this->getCacheInfo() empty?"
If the removePattern method is throwing that error, then memcached is not configured correctly in your factories.yml configuration.
The answer you want is discussed on this page under the heading 'Alternative Caching storage'.
You need to set the class and parameters correctly to use the removePattern method. For instance:
view_cache:
  class: sfMemcacheCache
  param:
    host: localhost
    port: 11211
    persistent: true
    storeCacheInfo: true
You will also need to ensure that the memcached daemon is operational. If memcached is running on the same host as the script in its default configuration, then the above settings should be fine. If you are running memcached on a separate host, or you have multiple hosts sharing a memcached service, then you must ensure your host, port, and firewall settings are correct for your environment.
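To rule out connectivity issues before debugging symfony itself, a quick sanity check with the raw PHP memcache extension (the host and port here are assumptions matching the config above):
$memcache = new Memcache();
// Fails fast if the daemon is unreachable on the configured host/port.
if (!$memcache->connect('localhost', 11211)) {
    die("memcached is not reachable on localhost:11211\n");
}
echo "memcached version: ".$memcache->getVersion()."\n";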
Most Terraform providers demand a predefined flow: Create/Read/Update/Delete/Exists.
I am in a weird situation developing a provider against an API where this behavior diverges a bit.
There are two kinds of resources, Host and Scope. A host can have many scopes. Scopes are updated with configurations.
This generally fits well into the Terraform flow; a full CRUDE flow is possible, except in one instance.
When a new Host is made, it automatically has a default scope attached to it. It is always there and cannot be deleted.
I can't figure out how to have my provider handle this gracefully. I would want Terraform to treat the default scope like any other resource, but it has no explicit CREATE/DELETE, only READ/UPDATE/EXISTS, while every other scope attached to the host has the full CREATE/DELETE set.
Importing is not an option due to density; requiring an import for every host would render the entire thing pointless.
I originally was going to attempt to split Scopes and Configurations into separate resources so one could be fulfilled by the Host (the host providing the scope ID for its configuration, and other configurations getting their scope IDs from a scope resource).
However, this approach falls apart because the API for both is the same, unless I added the abstraction of creating an empty scope and then applying a configuration against it, which may not be fully supported. It would essentially be two resources controlling one API object, which could lead to dramatic conflicts.
A paraphrased example of the configuration I thought about implementing:
resource "host" "test_integrations" {
name = "test.integrations.domain.com"
account_hash = "${local.integrationAccountHash}"
services = [40]
}
resource "configuration" "test_integrations_root_configuration" {
name = "root"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations.root_scope_id}"
hostnames = ["test.integrations.domain.com"]
}
resource "scope" "test_integrations_other" {
account_hash = "${local.integrationAccountHash}"
host_hash = "${host.test_integrations.id}"
path = "/non/root/path"
name = "Some Other URI Path"
}
resource "configuration" "test_integrations_other_configuration" {
name = "other"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations_other.id}"
}
In this example flow, a configuration resource and a scope resource unfortunately point at the same underlying object, which I worry would cause conflicts or confusion over who is responsible for what, and it dramatically confuses the create/delete lifecycle.
But I can't figure out how the Terraform lifecycle would allow for a resource that only supports UPDATE/READ/EXISTS if, say, a flag was given (and how state would handle that).
An alternative would be to have just a Configuration resource, but then if it were the root configuration it would need to skip create/delete, as it is inherently tied to the host.
Ideally I'd be able to handle this situation gracefully. I am trying to avoid including the root scope/configuration in the host definition, as that would create a split in how they are written and handled.
The documentation for providers implies you can use a resource AS a schema object in another resource, but does not explain how or why. If it works the way I imagine, it might be possible to create a resource that is only used for injection into the host, but I don't know whether that is how it works, or if so, how to accomplish it.
I believe I have tentatively found a solution after asking some folks on the Gophers Slack.
Using the AWS provider's Default VPC resource as a reference, I can "clone" the resource into one with a custom Create/Delete lifecycle.
Loose example:
func defaultResourceConfiguration() *schema.Resource {
    drc := resourceConfiguration()
    drc.Create = resourceDefaultConfigurationCreate
    drc.Delete = resourceDefaultConfigurationDelete
    return drc
}

func resourceDefaultConfigurationCreate(d *schema.ResourceData, m interface{}) error {
    // Double-check it exists, then update the resource instead of creating it.
    return resourceConfigurationUpdate(d, m)
}

func resourceDefaultConfigurationDelete(d *schema.ResourceData, m interface{}) error {
    log.Printf("[WARN] Cannot destroy Default Scope Configuration. Terraform will remove this resource from the state file, however resources may remain.")
    return nil
}
This should allow me to provide an identical resource that is designed to interact with the already existing one created by its parent host.
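For completeness, a loose sketch of how the cloned resource might sit next to the normal one in the provider's ResourcesMap (the resource names here are illustrative, not from a real provider):
func Provider() *schema.Provider {
    return &schema.Provider{
        ResourcesMap: map[string]*schema.Resource{
            // normal scopes get the full CRUD lifecycle
            "example_configuration": resourceConfiguration(),
            // the default scope clone uses the no-op Create/Delete above
            "example_default_configuration": defaultResourceConfiguration(),
        },
    }
}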
I'm trying to dynamically create Compute instances on Google Cloud via PHP using API.
Using csmartinez Cloud API sample, I managed to create my instance and can see it running on Google Console.
I need to assign an external IP address to this newly created instance. Based on the API and Google PHP library, I then added:
$ipName = "foo";
$addr = new Google_Service_Compute_Address();
$addr->setName($ipName);
$response = $service->addresses->insert($project, $ipZone, $addr);
Since this call has a "pending" status at first, I added a sleep(5) so I can then get the newly created static IP:
$response = $service->addresses->get($project, $ipZone, $ipName);
$ip = $response->address;
which runs well and gives me the correct IP address. I then continue and try to create my instance while assigning the new IP:
$networkConfig = new Google_Service_Compute_AccessConfig();
$networkConfig->setNatIP($ip);
$networkConfig->setType("ONE_TO_ONE_NAT");
$networkConfig->setName("External NAT");
$googleNetworkInterfaceObj->setAccessConfigs($networkConfig);
The static IP is created and the instance is created, but the IP is not assigned to the instance.
To rule out the pending status as the cause, I also tried assigning an already created static IP to my instance, using:
$networkConfig->setNatIP("xxx.xxx.xxx.xxx");
with no more success. What am I missing here?
I think this line:
$googleNetworkInterfaceObj->setAccessConfigs($networkConfig);
should be:
$googleNetworkInterfaceObj->setAccessConfigs(array($networkConfig));
If this doesn't work, there might be another error somewhere else.
This would be the whole procedure, which assigns a network:
$instance = new Google_Instance();
$instance->setKind("compute#instance");
$accessConfig = new Google_AccessConfig();
$accessConfig->setName("External NAT");
$accessConfig->setType("ONE_TO_ONE_NAT");
$network = new Google_NetworkInterface();
$network->setNetwork($this->getObjectUrl($networkName, 'networks', $environment->getPlatformConfigValue(self::PROJECT_ID)));
$network->setAccessConfigs(array($accessConfig));
$instance->setNetworkInterfaces(array($network));
$addr->setName() is merely cosmetic; try $addr->setAddress($ipAddress).
I guess the Google_NetworkInterface would need the $addr assigned.
Sorry, I currently have no spare IP and do not want to pay just to provide code.
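Putting the answer and comments together, a hedged sketch of the whole flow (the variable names match the question; the array wrapper around the access config is the key part):
$addr = new Google_Service_Compute_Address();
$addr->setName($ipName);
$service->addresses->insert($project, $ipZone, $addr);
// ... wait for the reservation to leave the "pending" state ...
$reserved = $service->addresses->get($project, $ipZone, $ipName);

$accessConfig = new Google_Service_Compute_AccessConfig();
$accessConfig->setName("External NAT");
$accessConfig->setType("ONE_TO_ONE_NAT");
$accessConfig->setNatIP($reserved->address);

// setAccessConfigs() expects an array of configs, not a bare object.
$googleNetworkInterfaceObj->setAccessConfigs(array($accessConfig));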
I am using a local memcache server for storing values. It works fine if I define memcache as the selected cache driver in config/cache.php. However, if I use memcache outside Laravel, access is much faster than going through Cache::get() in Laravel controllers.
I need to store a decent amount of data in memcache that will be accessed across the system, so I was trying to use memcache directly, but I am getting the following error:
[2016-08-23 14:11:19] local.ERROR: Symfony\Component\Debug\Exception\FatalThrowableError: Class 'App\Http\Controllers\Memcache' not found in....
My code is as follows:
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Http\Requests;
use Cache;
use stdClass;
use DB;
use Memcache;

class InternalCommunication extends Controller
{
    public function update_stock_prices_memcache()
    {
        echo "\n before the memcache obj creation ".microtime(true);
        $memcache = new Memcache();
        $memcache->connect('localhost', 11211) or die("Could not connect");
        //$res1 = $memcache->set('key1', "Some value 2");
        $res1 = $memcache->get('key1');
        .....
Just to be clear, the memcache packages are installed and working fine, as I could get it working via Cache:: as well as by accessing memcache directly outside the Laravel installation.
Appreciate any sort of help I can get.
As per the Laravel cache docs, you need to set the memcached configuration in config/cache.php and specify the driver as "memcached".
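For reference, a minimal config/cache.php sketch (the host and port are assumptions for a default local daemon):
'default' => 'memcached',

'stores' => [
    'memcached' => [
        'driver'  => 'memcached',
        'servers' => [
            // one local memcached instance with default settings
            ['host' => '127.0.0.1', 'port' => 11211, 'weight' => 100],
        ],
    ],
],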
Then simply use \Cache as in the example below.
// get a value
$value = \Cache::get('key');

// set a value
$minutes = 30;
\Cache::put('key', 'value', $minutes);

You could also specify the driver in your code if you have multiple cache stores:
$value = \Cache::store('memcached')->get('key');
I was able to figure out the changes required to access memcache within Laravel. The following code is working smoothly for me now.
use Memcached;
....
$memcache = new Memcached;
$memcache->addServer('localhost', 11211) or die ("Could not connect");
$res1 = $memcache->get('key1');
....
This is definitely faster than Cache::get() using the memcache driver!
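If you want to quantify the difference yourself, a rough timing sketch along the lines of the microtime() calls in the question (the key name is an assumption):
$start = microtime(true);
\Cache::get('key1');    // through Laravel's cache facade
$viaFacade = microtime(true) - $start;

$start = microtime(true);
$memcache->get('key1'); // through the Memcached object directly
$viaDirect = microtime(true) - $start;

printf("Cache::get: %.6fs, Memcached::get: %.6fs\n", $viaFacade, $viaDirect);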
I want a list of all networking statistics: network connections, routing tables, and the number of network interfaces. How can I do this in Ruby?
I noticed you've tagged this as JRuby as well... you can get some of the info using the Java APIs.
But do not expect to get the full routing table (as netstat gives you); in the end a gem (or plugin) will use a system call anyway (at least on Java there's no way to get the tables cross-platform)...
ifaces = java.net.NetworkInterface.getNetworkInterfaces.to_a # all you have
name = ifaces[0].name # name = "number" of interface e.g. "eth0"
ips = ifaces[0].inet_addresses.map { |addr| addr.to_s } # likely IP6/IP4
Explore the API: http://docs.oracle.com/javase/6/docs/api/java/net/NetworkInterface.html
You might find some of the "primitive" routing stuff with a few tricks, but not all:
Is it possible to get the default gateway IP and MAC addresses in java?
Java finding network interface for default gateway
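If shelling out is acceptable (which, as noted above, is what a gem would end up doing anyway), a minimal Ruby sketch (the flags assume a Unix-like netstat):
routes      = `netstat -rn`  # routing table
connections = `netstat -an`  # open connections and listening sockets
puts routes
puts connections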
As of newer versions of Doctrine2, I know there is Doctrine\ORM\Configuration#setHydrationCacheImpl() for passing a cache implementation such as MemcacheCache.
But how can this be done in the container?
I'm using two entity managers, named "default" and "other".
I first tried defining the hydration cache in config.yml like this:
doctrine:
    orm:
        default_entity_manager: default
        ...
        entity_managers:
            default:
                ...
                metadata_cache_driver:
                    type: service
                    id: memcache_driver
                ...
                hydration_cache_driver:
                    type: service
                    id: memcache_driver
                ...
            other:
                ...
Note: memcache_driver is a service defined by me, an instance of Doctrine\Common\Cache\MemcacheCache.
Then I got: Unrecognized options "hydration_cache_driver" under "doctrine.orm.entity_managers.default".
I also tried to tweak the container directly in AppKernel#buildContainer, but there are no instances of \Doctrine\ORM\Configuration defined as services, so I couldn't retrieve the Configuration instance.
Any suggestions are welcome.
EDIT:
I'm sure the feature for caching hydrated objects was re-implemented as of Doctrine 2.2.2:
http://www.doctrine-project.org/jira/browse/DDC-1766
https://github.com/doctrine/doctrine2/blob/2.2.2/tests/Doctrine/Tests/ORM/Functional/HydrationCacheTest.php?source=c
For other simple services, I can easily add method calls by overriding whole definitions like:
service1:
    ...
    calls:
        [method calls]
But for the entity managers, I'm not sure how to add method calls to them.
So my question, in other words: how do I configure the ORM at a lower level, without using the semantic configuration?
In my case, the hydration cache is hardly used, so I decided this time to call Query#setHydrationCacheProfile just before each query is executed.
...
$query = $queryBuilder->getQuery();
$cache = $this->container->get('...'); // instanceof MemcacheCache
// QueryCacheProfile lives in Doctrine\DBAL\Cache
$query->setHydrationCacheProfile(new QueryCacheProfile(null, null, $cache));
$query->execute();
...
There is no such option as "hydration_cache_driver"; you should use "result_cache_driver" to achieve that.
As of Doctrine 2.1, Doctrine can cache the results of queries, but it doesn't cache objects after hydration.
Look at the docs on Doctrine configuration:
http://symfony.com/doc/master/reference/configuration/doctrine.html
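In other words, the config from the question should work with the key renamed; a sketch under that assumption:
doctrine:
    orm:
        entity_managers:
            default:
                result_cache_driver:
                    type: service
                    id: memcache_driver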