Will Amazon FBA Warehouses work with the Domestic Backup Warehouse workflow? - settings

Does the FBA Warehouse function the same as a 3PL/In-House Warehouse in the context of Domestic Backup Warehouse Fulfillment?

Yes - FBA Warehouses will work with the Domestic Backup Warehouse workflow.
Notes:
You will need to have the orderbot sequence set up.
You need to couple it with an MCF Orderbot for the orders to automatically send to Amazon; otherwise they will just sit in Awaiting MC Fulfillment.
Additional orderbots are needed: "If OOS", "If partially in stock", etc.
Link to confirmation in slack


How to handle data migrations in distributed microservice databases

I'm learning about microservices and common patterns, and I can't seem to find how to address one issue.
Let's say that my customer needs a module managing customers and a module managing purchase orders.
I believe that when dealing with microservices it's pretty natural to split these two functionalities into separate services, each having its own data.
CustomerService
PurchaseOrderService
Also, he wants a table of purchase orders displaying the data of both customers and purchase orders, e.g. customer name and order number.
Now, I don't want to use the API Composition pattern, because the user must be able to sort on any column, which (AFAIK) is impossible to do with that pattern without slaughtering performance.
Instead, I chose the CQRS pattern:
after every purchase order / customer update a message is sent to the message broker
message broker notifies the third service about that message
the third service updates its projection in its own database
So, our third service:
PurchaseOrderTableService
It stores all the required data in a single database. Now we can query it and sort on any column we like while still maintaining good performance.
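To make the flow concrete, here is a minimal sketch of the projection side; Kafka and PostgreSQL are replaced by in-memory structures, and all event and field names are invented for illustration:

```python
# Minimal CQRS projection sketch: events that would normally arrive via
# a message broker are applied to a denormalized read model.

class PurchaseOrderProjection:
    def __init__(self):
        self.customers = {}   # customer_id -> name (local copy of source data)
        self.rows = {}        # order_id -> denormalized row for the table view

    def on_customer_updated(self, event):
        # Keep the local customer copy current and refresh affected rows.
        self.customers[event["customer_id"]] = event["name"]
        for row in self.rows.values():
            if row["customer_id"] == event["customer_id"]:
                row["customer_name"] = event["name"]

    def on_purchase_order_created(self, event):
        self.rows[event["order_id"]] = {
            "order_id": event["order_id"],
            "order_number": event["order_number"],
            "customer_id": event["customer_id"],
            "customer_name": self.customers.get(event["customer_id"]),
        }

    def query(self, sort_by):
        # Sorting on any column is cheap because the data is denormalized.
        return sorted(self.rows.values(), key=lambda r: r[sort_by])
```

In a real system the two `on_*` handlers would be Kafka consumers and the rows would live in a PostgreSQL table, but the shape of the logic is the same.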
And now, the tricky part:
In the future, the client can change his mind and say, "Hey, I need the purchase orders table to display an additional column: 'Customer country'."
How does one handle that data migration? So far, PurchaseOrderTableService knows only about two columns: 'Customer name' and 'Order number'.
I imagine this is probably a pretty common problem, so what can I do to avoid reinventing the wheel?
I can of course make CustomerService generate a 'CustomerUpdatedMessage' for every existing customer, which would force PurchaseOrderTableService to update all its projections, but that seems like a workaround.
If it matters, the stack I thought of is Java, Spring, Kafka and PostgreSQL.
Divide the problem in two:
Keeping live data in sync: from now on, your projection service also needs to persist the customer country, so all new orders will have the country as expected.
Backfilling the older orders: this is a one-off operation, so how you implement it really depends on your organization, technologies, etc. For example, you or a DBA can use whatever database tools you have to extract the data from the source database and do a bulk update to the target database. In other cases you might have to solve it programmatically, for example by creating a process in the projection microservice that queries the Customer microservice's API to get the data and updates the local copy.
Also note that in most cases you will already have a process to backfill data, because the need for the projection microservice might arrive months or years after the orders and customers services were created. Other times the search service is a third-party search engine, like Elasticsearch, instead of a database. In those cases I would always keep at hand a process to fully reindex the data.
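The programmatic backfill described above might look like the following sketch; `fetch_customer` is a stand-in for a real HTTP call to the Customer service's API, and the field names are invented:

```python
# One-off backfill sketch: pull the missing "country" field from the
# customer service and patch the existing projection rows in place.

def backfill_country(projection_rows, fetch_customer):
    """projection_rows: list of dicts with a 'customer_id' key.
    fetch_customer: callable customer_id -> customer dict."""
    # Fetch each distinct customer once, not once per projected row.
    customer_ids = {row["customer_id"] for row in projection_rows}
    countries = {cid: fetch_customer(cid)["country"] for cid in customer_ids}
    for row in projection_rows:
        row["customer_country"] = countries[row["customer_id"]]
    return projection_rows
```

For large datasets you would page through the rows and batch the API calls, but the deduplicate-then-patch structure stays the same.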

Advice on Setup

I started my first data analysis job a few months ago. I am in charge of a SQL database, and of taking that data and creating dashboards in Power BI. Our SQL database is replicated from an online web portal we use for data entry: we do not add data to the database ourselves; instead, data is put into tables based on what is entered in the web portal. Since this database is replicated by another company, I created our own database that is connected via a linked server. I have built many views to pull only the needed data from the initial database (I did this to limit the amount of data sent to Power BI, for performance). My view count is climbing, and I am wondering whether, in terms of performance, this is the best way forward. The highest row count of a view is 32,000 and the lowest is around 1,000 rows.
Some of the views that I am writing end up joining 5-6 tables together due to the structure built by the data web portal company that controls the database.
My suggestion would be to create a data warehouse (star) schema, keeping as a principle one star schema per domain: for example, one for sales, one for subscriptions, one for purchases, etc. Use the logic of data marts.
Identify your dimensions and your facts and keep evolving that schema. You will find that you end up with far fewer tables.
Your data is not that big, so you can use whatever ETL strategy you like:
truncate-and-load or incremental.
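To illustrate the star-schema idea, here is a tiny sketch in SQLite; all table and column names are made up, not taken from the actual portal schema:

```python
import sqlite3

# Tiny star-schema sketch: one fact table referencing one dimension.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        customer_key INTEGER PRIMARY KEY,
        customer_name TEXT,
        region TEXT
    );
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        customer_key INTEGER REFERENCES dim_customer(customer_key),
        sale_date TEXT,
        amount REAL
    );
    INSERT INTO dim_customer VALUES (1, 'Acme', 'North'), (2, 'Globex', 'South');
    INSERT INTO fact_sales VALUES (10, 1, '2024-01-05', 120.0),
                                  (11, 2, '2024-01-06', 80.0),
                                  (12, 1, '2024-01-07', 40.0);
""")

# Dashboards then aggregate the fact table, slicing by dimension attributes,
# instead of re-joining 5-6 source tables in every view.
rows = conn.execute("""
    SELECT d.region, SUM(f.amount)
    FROM fact_sales f JOIN dim_customer d USING (customer_key)
    GROUP BY d.region ORDER BY d.region
""").fetchall()
```

The point is that the 5-6-table joins happen once, in the ETL that loads the facts and dimensions, rather than in every Power BI view.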

Can Data Replication Deliver/Push One of Two Set Data to Client Nodes?

I stepped into a retail system merge project recently. A retail chain company acquired a far smaller retail chain in a different business. The company has decided to modify its retail system so that it can also be used in the acquired retail stores. The retail system is built on the SAP retail application, with Oracle data replication feeding a store inventory application. There is one set of DB tables under one schema that is read-only in the store application, and another set of DB tables under another schema for data generated in the store application. In other words, from a store's point of view, the first set of tables is for inbound data and the second set is for both outbound and inbound data.
The SDEs who built the store application suggest adding a new column, store type, to multiple tables for the inbound data to differentiate the data of the two retail business systems. For example, they want to add a store type column to their vendor table. To my understanding, data replication can be set up so that only related data is sent to a client node: for example, a store of one retail business should receive vendor inbound data for that business, but not any vendor data for the other system. If so, why is a new column needed? Those SDEs are not experts in data replication, and I didn't know anything about data replication until three weeks ago. I don't know whether I am missing something on this subject or not.
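For what it's worth, the per-node row filtering the question assumes can be pictured as a predicate attached to each subscription. Here is a toy, vendor-neutral sketch (not actual Oracle replication configuration); note that the predicate needs some column to filter on, which is one argument for the proposed store type column:

```python
# Toy sketch of predicate-based replication: each store node subscribes
# with a filter, and only matching rows are pushed to it.

def replicate(rows, subscriptions):
    """rows: list of row dicts on the publisher side.
    subscriptions: node_name -> predicate over a row dict."""
    return {
        node: [row for row in rows if pred(row)]
        for node, pred in subscriptions.items()
    }
```

Whether the real replication product filters by column value, by separate table, or by schema is exactly the design decision being debated here.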

Multi tenancy with tenant sharing data

I'm currently in the process of building a webapp that sells subscriptions, as a multi-tenant app. The tech I'm using is Rails.
However, it will not just be isolated tenants using the app.
Each tenant creates products and publishes them on their personal instance of the app, and each tenant has its own user base.
The problematic specification is that a tenant may share its products with other tenants, so they can resell them.
Explanation :
FruitShop sells apples, oranges and tomatoes.
VegetableShop sells radishes and bell peppers.
FruitShop shares tomatoes with other shops.
VegetableShop decides to get tomatoes from the available list of shared items and adds them to its inventory.
Now a customer browsing VegetableShop will see radishes, bell peppers and tomatoes.
As you can guess, a select products where tenant_id = 'vegetableshop_ID' will not work.
I was thinking of a many-to-many relation with some kind of tenant_to_products table that would have tenant_id, product_id, price_id and even publish begin/end dates. Products would then be a "half-tenanted" table where the tenant ID is replaced by a tenant_creator_id to record the original owner.
To me it seems cumbersome; adding it would mean complex queries, even for shops selling only their own products. Getting the sold products would be complicated:
select tenant_to_products.*
where tenant_to_products.tenant_id = 'current tenant'
AND (tenant_to_products.product matches publication constraints)
for each tenant_to_product do
  # this will trigger a lot of DB calls
  display tenant_to_product.product with tenant_to_product.price
Un-sharing a product would also mean a complex update modifying all tenant_to_products rows referencing the original product.
I'm not sure it is a good idea to implement the constraint like this. What do you suggest I do? Am I planning something stupid, or is it a not-so-bad idea?
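For what it's worth, a single join can return a shop's full catalog (own plus shared products) without the per-product DB calls in the loop above. A sketch in SQLite, with invented column names and the Rails layer omitted:

```python
import sqlite3

# Sketch of the tenant_to_products design: products carry only their
# creator, and the join table says who may sell what, at which price.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE products (
        id INTEGER PRIMARY KEY,
        tenant_creator_id TEXT,
        name TEXT
    );
    CREATE TABLE tenant_to_products (
        tenant_id TEXT,
        product_id INTEGER REFERENCES products(id),
        price REAL
    );
    INSERT INTO products VALUES
        (1, 'fruitshop', 'tomato'),
        (2, 'vegetableshop', 'radish');
    INSERT INTO tenant_to_products VALUES
        ('fruitshop', 1, 2.0),
        ('vegetableshop', 2, 1.0),
        ('vegetableshop', 1, 2.5);   -- shared tomato, reseller price
""")

# One query per page, not one query per product.
catalog = conn.execute("""
    SELECT p.name, t.price
    FROM tenant_to_products t JOIN products p ON p.id = t.product_id
    WHERE t.tenant_id = 'vegetableshop'
    ORDER BY p.name
""").fetchall()
```

In Rails this would be an `includes`/join on the association, so the feared N+1 pattern never materializes; un-sharing is then a delete (or an end-date update) on the join rows for that product.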
You are going to need a more complicated subscription to product mechanism, as you have already worked out. It sounds like you are on the right track.
Abstract the information as much as possible. For example, don't call the table 'tenant_to_product'; instead, call it 'tenant_relationships', and have the product ID as a column in this table.
Then, when a tenant wants to offer services, you can simply add a 'service_id' column to this table without having to add a whole extra table.
For performance, you can have a read-only database server with tenant relationships that is updated on a slight delay. Azure or similar cloud services would make this easy to spin up. However, that probably isn't needed unless you're in the order of 1 million+ users.
I would suggest you consider:
Active/Inactive (Vegetable shop may prefer to temporarily stop selling Tomatoes, as they are quite faulty at the moment, until the grower stops including bugs with them)
Server-side services for notification, such as 'productRemoved' service. These services will batch-up changes, providing faster feedback to the user.
Don't delete information, instead set columns 'delete_date' and 'delete_user_id' or similar.
Full auditing history of changes to products, tenants, relationships, etc. This table will grow quite large, so avoid reading from it and ensure updates are asynchronous so that the caller isn't blocked waiting for the table to update. But it will probably be very useful from a business perspective.
EDIT:
This related question may be useful if you haven't already seen it: How to create a multi-tenant database with shared table structures?
Multi-tenancy does seem the obvious answer, as you are providing your system to multiple clients.
However, as an alternative, perhaps consider a reseller 'service layer'; this would enable a degree of isolation while still offering integration. Take inspiration from how reseller accounts work with third parties like Amazon.
I realise this is very abstract reasoning, but considering the integration at a higher tier than the data layer could be of benefit.
From experience strictly enforcing multi-tenancy at the data layer, we have found that tenancy sometimes has to be manipulated at the business layer (like your reseller idea) to such a degree that tenancy becomes a tricky concept. So considering alternatives early on could help.

Magento geo-location and multiple warehouses.

This is a complex issue, so if you like to work on tough problems then this situation is for you. I'm running a Magento store that sells tires. I get tires from multiple warehouses. Some warehouses have unique tires, but many carry the same tire that other warehouses have, sold to me at different prices.
All warehouses will deliver my tires to the customer locally and all warehouses will UPS ship my tires anywhere.
Here is the problem: while a customer is shopping, they select their tire. The results could include several of the same tire from many warehouses. They need to be able to pick the one from their local warehouse so that warehouse can deliver with its trucks.
But if there is no tire from the local warehouse, then the customer will be okay with UPS. Since tires are expensive to ship, the tire needs to come from the closest warehouse for UPS delivery.
I am trying to come up with the best way to set up the catalog and tie it into this unique shipping situation: shopping by geo-location with multiple warehouses. I would love to have each warehouse in its own database; if not, I'll use a prefix on each SKU. Any ideas? Thanks in advance. JJB
1a. You should set up multiple websites. Multiple websites allow your items to have different stock per site.
1b. Alternatively, you can buy one of the modules that allow your items to have multiple stock.
2. I don't know if you need it, but for geo-location there are solutions like mod_geoip for Apache.
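Once geo-location gives you the customer's coordinates, the closest-warehouse choice the question describes is a simple great-circle comparison. A sketch, with made-up warehouse names and coordinates (none of this is Magento API code):

```python
import math

# Pick the nearest warehouse to a customer by great-circle distance.

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def closest_warehouse(customer, warehouses):
    """customer: (lat, lon); warehouses: list of (name, lat, lon) tuples."""
    return min(warehouses,
               key=lambda w: haversine_km(customer[0], customer[1], w[1], w[2]))
```

In practice you would run this over only the warehouses that actually stock the selected tire, preferring local-truck delivery when the nearest one qualifies and falling back to UPS from the closest stocked warehouse otherwise.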
I found a Multi-Warehouse custom module on Magento Connect that might be a solution.
